The point isn’t that chatbots are indistinguishable from humans. It’s that either:
1) Chatbots are already conscious, or
2) There’ll be no way to tell if one day they are.
Both should be deeply concerning (assuming you think it is theoretically possible for a chatbot to be conscious).
I do not think this is true.
I don’t believe that chatbots are already conscious. Yet I do think we’ll be able to tell. Specifically, I think we’ll be able to trace back the specific kind of neural processing that generates beliefs and reports about consciousness, and then see which functional properties this process has that make it unique compared to non-conscious processing. Then we can look into chatbots’ brains and see if they’re doing this processing (i.e. see if they’re saying they’re conscious because they have the mental states we do, or if they’re just mimicking our reports of our own mental states without having any of their own).
Do you think you can do this with a human right now? I’m having trouble parsing what this would actually look like.
It also seems like circular logic for your test to rely on “which functional properties this process has that make it unique compared to non-conscious processing” when the whole challenge is that there apparently isn’t a clear line between conscious and non-conscious processing.
Yair, you are correct.
Point 2) is why I wrote the story. In a conversation about the potential for AI rights, some friends and I came to the disconcerting conclusion that it’s kinda impossible to justify your own consciousness (to other people). That unnerving thought prompted the story, since if we ourselves can’t justify our consciousness, how can we reasonably expect an AI to do so?