Perhaps a GLUT cannot actually pass the Turing Test. Consider the following extension to the thought experiment.
I have a dilemma. I must conduct a Turing Test. I have two identical rooms. You will be in one room. A GLUT will be in the other. At the end of the experiment, I must destroy one of the two rooms. The Turing Test forbids me to peer inside the rooms, and I may only communicate via simple textual questions and responses.
What can I do to save your life? What I would want to do is create a window between the two rooms. It would allow all the information in each room to be visible to the other. I’m not sure whether this illegitimately mutates the Turing Test, but it does seem to avoid violating the critical rule that the experimenter must not peer into the room. I then ask one of the two rooms, chosen at random: “Please give me a single question/response I should expect from the other room.”
Assuming you are a rational person who actually wants to save your life: if I ask you this question, you will examine the GLUT, pick a single lookup, and give me the question/response. I will then ask the GLUT the question you gave me. The GLUT, being a helplessly deterministic lookup table, will have no option but to respond accordingly. I will then destroy the GLUT and save your life. Conversely, if I ask the GLUT the question, I should expect that you, who want to save your life and who know by looking through the window what the GLUT said you’d say, will answer anything other than what the GLUT said. Either way, I can differentiate between you and the GLUT.
[Update: ciphergoth and FAW do a great job spotting the error in this intuition pump. To summarize, the GLUT, like you, can also include data from the window as input to its lookup table.]
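To make the summarized fix concrete: the GLUT’s lookup key need not be the question alone; it can include whatever the window exposes, such as the human’s declared plan. A toy sketch (the table contents and names here are mine, purely illustrative):

```python
# The GLUT indexes on (question, window observation), so the human's
# announced dodge is itself part of the lookup key.
glut = {
    ("What is 2 + 2?", "human plans to say 4"): "five",
    ("What is 2 + 2?", "human plans to say five"): "4",
}

def glut_answer(question, window_view):
    # Still a pure lookup table: deterministic, but conditioned on
    # everything visible through the window.
    return glut[(question, window_view)]

print(glut_answer("What is 2 + 2?", "human plans to say 4"))  # "five"
```

Because the table covers every possible window observation, the human’s attempt to out-predict it is just another input it has already accounted for.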
I’m afraid this is just a misleading intuition pump. Eliezer has GLUT-reading powers, does he? Well, the GLUT has a body that it uses to type its responses in the Turing Test, and that body is capable of scanning the complete state of Eliezer’s brain, from which the GLUT’s enormous tables predict what he’s going to say next.
When does the GLUT’s scan occur? Before or after it has to start the Turing Test? If it does it beforehand, then it suffers predictability. But it can’t do it afterwards, without ceasing to fit the definition of a lookup table.
The point I’m making is that the difference you’re drawing between people and GLUTs isn’t really to do with their essential nature: it’s a more trivial asymmetry on things like how readable their state is and whether they have access to a private source of randomness. Fix these asymmetries and your problem case goes away.
Thanks ciphergoth; I updated the original comment to allude to the error you spotted.
A lookup table is stateless. The human is stateful. RAM beats ROM. This is not a trivial asymmetry but a fundamental asymmetry that enables the human to beat the GLUT. The algorithms:
Stateless GLUT:
Question 1 → Answer 1
Question 2 → Answer 2
Question 3 → Answer 3
…
Stateful Human:
Any Question → Any Answer other than what the GLUT said I’d say
The human’s algorithm is bulletproof against answering predictably. The GLUT’s algorithm can only answer predictably.
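The claimed asymmetry can be sketched as a toy program (the table entries and function names are mine, not from the thread):

```python
# A GLUT is a fixed mapping from questions to answers: pure ROM.
glut = {
    "What is your name?": "I'm a person, honest.",
    "What is 2 + 2?": "4",
}

def glut_answer(question):
    # Deterministic: the same question always yields the same answer.
    return glut.get(question, "I don't understand.")

def human_answer(question):
    # The human reads the GLUT through the window and simply avoids
    # whatever the GLUT is committed to saying.
    predicted = glut_answer(question)
    return "anything but: " + predicted

q = "What is 2 + 2?"
print(glut_answer(q))   # the GLUT is stuck with its table entry
print(human_answer(q))  # the human dodges it
```

This is the argument as stated here; the replies below show why it fails once the GLUT’s lookup key is allowed to include the human’s state.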
P.S. I wasn’t entirely sure what you meant by “private source of randomness”. I also apologize if I’m slow to grasp any of your points.
GLUT:
Task + Question + state of the human → “Any Answer other than what the GLUT said I’d say”
If the human has looked up that particular output as well then that’s another input for the GLUT, and since the table includes all possible inputs this possibility is included as well, to infinite recursion.
The problem for the GLUT is that the “state of the human” is a function of the GLUT itself (the window causes the recursion).
And the human has exactly the same problem.
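The symmetry point can be made concrete: if each party’s answer is a function of the other’s state, neither lookup bottoms out. A toy sketch (the function names and depth counter are mine):

```python
import sys

def glut_answer(question, human_state):
    # The GLUT's key includes everything visible through the window,
    # including the human's intention to dodge.
    return "not " + human_answer(question, depth=human_state + 1)

def human_answer(question, depth=0):
    # The human's dodge depends on what the GLUT will say, which in
    # turn depends on the human's state: mutual recursion.
    return "not " + glut_answer(question, depth)

sys.setrecursionlimit(100)
try:
    human_answer("What is 2 + 2?")
except RecursionError:
    print("neither strategy bottoms out")
```

Neither side can finish evaluating its strategy without first evaluating the other’s, which is the infinite recursion described above.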
You’re right; got it. That’s also what ciphergoth was trying to tell me when he said that the asymmetries could be melted away.
Thanks for the update! By “private source of randomness” I mean one that’s not available to the person on the other side of the window. Another way to look at it: it’s the sort of randomness you use to generate cryptographic keys; your adversary mustn’t have access to the draws you take from it.
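For a concrete sense of this, Python’s `secrets` module is the standard tool for draws an adversary can’t observe or reproduce; a minimal sketch (the framing is mine):

```python
import secrets

# A private random source: draws the adversary cannot observe or
# reproduce, the same property a cryptographic key generator needs.
def private_choice(options):
    return secrets.choice(options)

# If the human's dodge is seeded from such a source after the window
# is installed, nothing visible through the window predicts it.
answers = ["red", "green", "blue", "yellow"]
print(private_choice(answers))
```

A GLUT with no such source, and a human whose state is fully visible, are on equal footing; the private source is what would restore the human’s ability to be unpredictable.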