Even if the training data contains no mention of consciousness, the training process can still encourage all of the subsidiary mental faculties we lump together under the label “consciousness”: memory, self-reflection (both automatic and deliberate), modulation of behavior in different circumstances, monitoring of the environment and connecting that monitoring to other faculties, and so on.
But of course, AI doesn’t have to do all these things in the same way humans do them, nor do its relative skill levels in each faculty have to be the same as humans’. You could have an AI that did most things in a human-like way except that it was 10x better at connecting senses to emotions but 10x worse at remembering what happened yesterday.
So “conscious or not” is not a one-dimensional thing. Asking whether some AI is conscious can be a lot like asking if a submarine swims.
When people propose tests for consciousness, one shouldn’t take them as getting at some underlying binary truth about whether consciousness is entirely there or entirely absent. They’re more like handles for grappling with how much we care about different sorts of AI in the same ways we care about other humans.
Also, you’re using “architecture” in a loose way here, and I mostly responded to that. But it’s also an interesting question how much “architecture” in the sense of the gross wiring diagram of the NN changes consciousness. I would say that feed-forward models are a lot less conscious, and that I’d care more about recurrent models with a rich internal state, even if they were able to generate similar text.
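To make the wiring-diagram distinction concrete, here’s a minimal sketch in Python/NumPy (the layer sizes and weight setup are mine, purely for illustration, not anything from this discussion): a feed-forward model maps each input to an output and discards everything, while a recurrent model threads a hidden state from step to step, so its current behavior can depend on its whole history.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_HID, D_OUT = 8, 16, 4  # hypothetical sizes

# Feed-forward: output depends only on the current input.
W1 = rng.normal(size=(D_IN, D_HID))
W2 = rng.normal(size=(D_HID, D_OUT))

def feedforward_step(x):
    h = np.tanh(x @ W1)  # transient activations, gone after the step
    return h @ W2

# Recurrent: a hidden state h persists across steps.
W_x = rng.normal(size=(D_IN, D_HID))
W_h = rng.normal(size=(D_HID, D_HID))
W_o = rng.normal(size=(D_HID, D_OUT))

def recurrent_step(x, h):
    h = np.tanh(x @ W_x + h @ W_h)  # new state mixes input with prior state
    return h @ W_o, h

# Run both over the same sequence.
h = np.zeros(D_HID)
for x in rng.normal(size=(5, D_IN)):
    y_ff = feedforward_step(x)       # same x always yields the same y_ff
    y_rnn, h = recurrent_step(x, h)  # y_rnn also depends on the history in h
```

From the outside the two can emit similar-looking outputs; the difference being pointed at is whether there is any persistent internal state at all.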
The way I’m using “consciousness,” I mean only an internal experience, not memory or self-reflection or anything else in that vein. I don’t know whether experience and those cognitive traits are linked, or what character that link would be. It would probably be pretty hard to determine whether something was having an internal experience if it didn’t have memory or self-reflection, but those are different buckets in my model.