Since other people are biologically similar to me, they probably say “I’m conscious” for the same reason I do, so it makes sense to believe them. The problem in the Chinese Room is that the system is quite different from a human and might be lying about some things, so there’s less reason to trust it when it claims to have human-like qualia.
Be careful (2, 3).
I can’t agree with you, because you can only assert that a person is biologically similar to you based on how they look and feel, barring cutting into them. If I were to design a robot that looked, felt, and talked enough like a human being that you had no way of discerning whether it was a real human or not, then by your own reasoning you would be inclined to believe it.
I admit I don’t have an answer to this problem; I just don’t agree with your statement.
I would believe the computer, not because I accept computationalism, but because when I imagine the situation happening in real life, I cannot imagine continuing to say to someone or something, “Actually, I’m not sure you’re really conscious,” when it acts conscious in every way.
I actually think the same thing is likely to happen to almost everyone (that is, eventually accepting that it is conscious), regardless of their prior philosophical views.
Yeah, that’s how Justin Corwin won twenty AI-box experiments.
Right. I’ve said before that we don’t need the experiment. We already know people will let out an AI that seems decent and undeserving of being in a box.