Mark M.,
“His beliefs have great personal value to him, and it costs us nothing to let him keep them (as long as he doesn’t initiate theological debates). Why not respect that?”
Values may be misplaced, and they have consequences. This particular issue doesn’t have much riding on it (on the face of it, anyway), but many do. Moreover, how we think is in many ways as important as what we think. The fellow’s ad hoc moves are problematic. Ad hoc adjustments to our theories and beliefs to avoid disconfirmation are like confirmation bias and other fallacies and biases: they are hurdles to creativity, to making better decisions, and to increasing our understanding of ourselves and the world. This all sounds more hard-nosed than I really am, but you get the point.
“By definition, wouldn’t our AI friend have clearly defined rules that tell us what it believes?”
You seem to envision AI as a massive database of scripts chosen according to circumstance, but this is not feasible: the number of scripts needed to enable intelligent behavior would be innumerable. No, an AI need not have “clearly defined rules” in the sense of being intelligible to humans. I suspect anything robust enough to pass the Turing Test in any meaningful (non-domain-restricted) sense would either be too complicated to decode or predict upon inspection, or would be the product of some artificial evolutionary process no more decodable than a brain. Have you ever looked at complex code? It can be difficult if not impossible for a person to understand as code, let alone to anticipate all the ways it may execute (thus bugs, infinite loops, etc.). As Turing said, “Machines take me by surprise with great frequency.”
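The point about unpredictability holds even for trivially short programs. Here is a minimal sketch (in Python; the function name is my own) of the well-known Collatz iteration: whether this four-line loop halts for every positive input is a famous open problem, so not even the termination of this tiny program, let alone its step count, can be read off by inspecting the code.

```python
def collatz_steps(n: int) -> int:
    """Count iterations of the Collatz map (n -> n/2 if even, 3n+1 if odd)
    until n reaches 1. Whether this loop terminates for *every* positive n
    is the unresolved Collatz conjecture -- a vivid case of code whose
    behavior resists prediction by inspection."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 -- a surprisingly long run for so small a start
```

If a four-line loop over the integers can defy analysis this way, a system rich enough to pass an unrestricted Turing Test would hardly be a readable rulebook.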
“You’ll just have to take my word for it that I had other unquantifiable impulses.”
But you would not take the word of an AI that exhibited human-level robustness in its actions? Why not?
“I think you might be misapplying the Turing test. Let’s frame this as a statistical problem. When you perform analysis, you separate factors into those that have predictive power and those that don’t. A successful Turing test would tell us that a perfect predictive formula is possible, and that we might be able to ignore some factors that don’t help us anticipate behaviour. It wouldn’t tell us that those factors don’t exist however.”
Funny, I’m afraid that you might be misapplying the Turing Test. The Turing Test is not supposed to provide a maximally predictive “formula” for a putative intelligence. Rather, passing it is arguably supposed to demonstrate that the subject is, in some substantive sense of the word, intelligent.
Where do people get the impression that we all have the right not to be challenged in our beliefs? Tolerance is not about letting every person’s ideas go unchallenged; it’s about refraining from other measures (enforced conformity, violence) when faced with intractable personal differences.
As for politeness, it is an overrated virtue. We cannot have free and open discussions if we are chained to the notion that we should not challenge those who cannot countenance dissent, or that we should be free from the dissent of others. Some people should be challenged, often and publicly. Of course, the civility of these exchanges matters, but, as presented by Eliezer, no serious conversational fouls or fallacies were committed in this case (contemptuous tone, ad hominems, tu quoque or other Latinate no-nos, etc.).
Mark D,
How do you know what the putative AI “believes” about what is advantageous or logical? How do you know that other humans are feeling compassion? In other words, however you feel about the Turing Test, how, other than through their behavior, would you be able to know what people or AIs believe and feel?