I’m getting the impression that “consciousness” is inherently not well defined; that is, there is no singular thing we can point to that will meaningfully determine whether or not something is “conscious”.
In this sense, consciousness might be a red herring. A similar but more concrete question is worth asking: what behaviours would an AI agent have to exhibit for you to want it granted fundamental rights and autonomy? Or, failing that, what behaviours would make it intrinsically unethical to create and run an instance of it?