AIXI contains sentient minds, but isn’t itself sentient. I suspect there are mind designs that are highly competent at many problems yet have a mental architecture totally different from humans, such that if we had a clearer idea of what we meant by “sentient”, we would agree the AI wasn’t sentient.
Also, how long do we have sentient AI before the singularity? If the first sentient AI is a paperclipper that destroys the world, any bill of “sentient AI rights” is pragmatically useless.