A number of hypotheses that I would label “strong AI” don’t claim that type-2 systems are impossible, they merely claim that type-1 systems can be constructed by means other than gestating a zygote.
I thought the value of the strong AI hypothesis was that you didn’t have to wonder whether you had created true consciousness or just a simulation of consciousness. The essence of the consciousness was somehow built into its patterns, no matter how they were instantiated; once you saw those patterns working, you knew you had a consciousness.
Your weaker version doesn’t have that advantage. If all I know is that something that does everything a consciousness does MIGHT be a consciousness, then I am still left with the burden of figuring out how to distinguish real consciousnesses from mere simulations of consciousnesses.
An underappreciated aspect of these issues is the red herring thrown in by some students of the philosophy of science. Science rightly says about ALMOST everything: “if it looks like a duck and it sounds like a duck and it feels like a duck and it tastes like a duck, and it nourishes me when I eat it, then it is a duck.” But consciousness is different. From the point of view of a dictatorial leader, it is not different: if a dictator can build a clone army and/or a clone workforce, why would he possibly care whether they are truly conscious or only simulations of consciousness? It is only somebody who believes we should treat consciousnesses differently than we treat unconscious objects who has any reason to care about the distinction.
I continue to think it doesn’t much matter what we attach the label “strong AI” to. It’s fine with me if you’d prefer to attach that label only to theories that, if true, mean we are spared the burden of figuring out how to distinguish real consciousness from non-conscious simulations of consciousness.
Regardless of labels: yes, if it’s important to me to treat those two things differently, then it’s also important to me to be able to tell the difference.