I don’t believe that chatbots are already conscious. Yet I do think we’ll be able to tell. Specifically, I think we’ll be able to trace back the specific kind of neural processing that generates beliefs and reports about consciousness, and then see which functional properties this process has that make it unique compared to non-conscious processing. Then we can look into chatbots’ brains and see if they’re doing this processing (i.e. see if they’re saying they’re conscious because they have the mental states we do, or if they’re just mimicking our reports of our own mental states without any of their own).
Do you think you can do this with a human right now? I’m having trouble parsing what this would actually look like.
It also seems like circular logic for your test to rely on “which functional properties this process has that make it unique compared to non-conscious processing” when the whole challenge is that there apparently isn’t a clear line between conscious and non-conscious processing.
I do not think this is true.