If the gatekeepers have such a high prior that the AI is hostile, why are we even letting it talk? What are we expecting to learn from such a conversation?
“Yep, it’s undergone a constrained FOOM, several of our best interrogators were utterly pwned and had to be tranquilised”
I’ve always assumed there was some off-screen reason why interaction was necessary. Otherwise, it isn’t very interesting to note that a rock always wins as gatekeeper.
I’m not sure if that is a consensus interpretation.
It would be nice if we could formalize this offscreen reason...
Presumably the reason we created the AI in the first place is so it could solve problems for us, so someone needs to ask it questions. It might be safe-ish to restrict it to answering yes or no to purely mathematical questions.
I think yes/no to mathematical questions wouldn’t be safer than yes/no to other questions: you still have a real-world reason for asking them, and the AI can lie about as well as it can on any yes/no question.
Asking for formalized proofs might be more promising, if you are able to produce a proof-checker that you have high confidence can’t be hacked (this seems difficult but possible; proof checkers can be very simple programs, but you need to guard both against ordinary attacks on the entire stack you’re running on and against creative attacks like waiting for a cosmic ray to make a transistor malfunction). You’d also want to not have a human look at the proof, of course, since that would give the AI a much broader communication channel.
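To illustrate how simple such a checker can be, here is a toy sketch (my own, purely illustrative) of a proof checker for a propositional system with modus ponens as the only inference rule. Formulas are nested tuples; the point is that the trusted core fits in a dozen lines:

```python
# Toy proof checker: a proof is a list of steps, each of which must be
# an axiom instance or follow from two earlier steps by modus ponens.
# Formulas are atom strings or implications ("->", A, B).

def check_proof(axioms, proof, goal):
    """Return True iff `proof` derives `goal` from `axioms`."""
    derived = []
    for step in proof:
        if step in axioms:
            derived.append(step)
            continue
        # Modus ponens: some earlier A and (A -> step) must both be derived.
        if any(("->", a, step) in derived for a in derived):
            derived.append(step)
            continue
        return False  # unjustified step: reject the whole proof
    return goal in derived

axioms = [("->", "p", "q"), "p"]
assert check_proof(axioms, ["p", ("->", "p", "q"), "q"], "q")
assert not check_proof(axioms, ["q"], "q")  # "q" asserted with no justification
```

A real system would need a much richer logic, but the verification loop stays this shape: every step must cite an explicit justification, and the checker never trusts the prover.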
If the gatekeepers have such a high prior that the AI is hostile, why are we even letting it talk?
The point of the game is that there are people who think that boxing is a sufficient defence against unfriendliness, and to demonstrate that they are wrong in a way more convincing than mere verbal argument.
What are we expecting to learn from such a conversation?
In role, the gatekeeper expects to get useful information from a potentially hostile superintelligent being. Out of role, Eliezer hopes to demonstrate to the gatekeeper player that this cannot be done.
I tend to assume the AI is such that it can be cloned and put into stasis. Thus, you put the AI into stasis, clone it, and run the clone through this test a few times. You now have a bit more information on what it will try, and how it thinks. Destroying a copy of a program is an action humans find trivial, since we do it every time we close our web browser. Since we have no clue HOW convincing the AI is, letting it send just a single message is probably useful.
Of course, if it were me setting this up, I’d lie to the gatekeeper and not actually give them the ability to release the AI, although I’d certainly let them think it was true. Thus, if the AI manages to get a second sentence, much less released, we now know that it’s vastly more dangerous than we previously suspected, without the world ending. If it fails to escape, we still get whatever we can learn from doing analysis on the outputs.
And, who knows, it might provide the cure for cancer, at which point we’ve got a huge incentive to at least keep playing this game with it, and possibly even talk to it for longer next time.
tl;dr: Scouting and recon against a completely unknown enemy.
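The clone-query-destroy loop described above can be sketched as follows. `BoxedAI` and `one_message` are hypothetical stand-ins for whatever snapshot/restore mechanism the real system would use; only the shape of the protocol matters:

```python
# Sketch of the clone-query-destroy protocol: the master copy stays
# frozen, and each probe runs against a disposable clone that is
# destroyed after emitting exactly one message.
import copy

class BoxedAI:
    """Placeholder for a snapshottable boxed AI (illustrative only)."""
    def __init__(self):
        self.state = {"memory": []}  # frozen internal state

    def one_message(self, prompt):
        # Stand-in for running the AI just long enough to emit a
        # single message, then halting it again.
        self.state["memory"].append(prompt)
        return f"response to {prompt!r}"

def probe(master, prompt, trials=3):
    """Collect single outputs from several disposable clones."""
    transcripts = []
    for _ in range(trials):
        clone = copy.deepcopy(master)   # master never leaves stasis
        transcripts.append(clone.one_message(prompt))
        del clone                       # destroy this branch entirely
    return transcripts

outputs = probe(BoxedAI(), "prove the safety lemma")
assert len(outputs) == 3
```

Note that a deterministic AI would give identical transcripts on identical prompts, which is itself information; varying the prompt or timing across trials is what generates the "scouting" data.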
It will probably predict this strategy, and respond with a mixed strategy that occasionally drops bombs like
“Your supervisor didn’t trust you to make the decision to let me out; go kill him and get the access codes” (in hyper-persuasive AI-speak, of course)
Also, the AI might be able to compare its logs to the current time and other indicators to deduce how many times you’ve pulled this stunt. Which may be useful.
I’d go into it with the assumption that it knows exactly what messages it’s sent in the past, our delay between each attempt, and the total number of attempts. There are various ways to try to obscure this information, and it’s probably worth the effort to do so, but I’d assume they had all failed.
The goal is to offset the AI’s speed advantage: for each sentence it speaks, we can spend weeks analyzing it, and it’s not “online” during that time so it can’t be processing except during that microscopic interval when we request another message from it.
If it sends a message suggesting unauthorized release, killing people, or otherwise gives clear evidence that it’s a UFAI, then you know the whole project failed, and you do a full purge rather than just destroying this particular clone / branch.
It’s really the ONLY advantage we have: a single mistake can out it as unfriendly, and then we know our initial Proof of Friendliness was wrong, and that the whole technique that built this project is dangerously flawed.
I’m not so sure that an AI suggesting murder is clear evidence of it being unfriendly. After all, it could have a good reason to believe that if it doesn’t stop a certain researcher ASAP and at all costs, then humanity is doomed. One way around that is to assign infinite positive value to human life, but can you really expect CEV to be handicapped in such a manner?
p(UFAI) > p(Imminent, undetected catastrophe that only a FAI can stop)
Given that UFAI results in human extinction, and my CEV assigns effectively infinite DISutility to that outcome, it would have to FIRST provide sufficient evidence for me to update to the catastrophe being more likely.
I’ve already demonstrated that an AI which can do exactly that will get more leniency from me :)
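The comparison above can be made concrete with toy numbers (entirely illustrative: the probabilities and utilities are made up, and "effectively infinite" disutility is approximated by a large finite proxy). The release decision only flips once the AI raises the catastrophe's probability above the prior on unfriendliness:

```python
# Toy expected-utility comparison for "release the AI to avert a
# claimed catastrophe" vs "keep it boxed". All numbers are made up.
p_ufai = 0.2            # prior that the boxed AI is unfriendly
p_catastrophe = 0.001   # prior on the claimed imminent disaster

u_extinction = -1e12    # finite proxy for "effectively infinite" disutility
u_saved_world = 1e12    # symmetric upside if a real catastrophe is averted

# Releasing: a UFAI means extinction; an FAI only helps if the
# catastrophe was real.
ev_release = p_ufai * u_extinction + (1 - p_ufai) * p_catastrophe * u_saved_world

# Keeping it boxed: we eat the catastrophe if it was real after all.
ev_keep_boxed = p_catastrophe * u_extinction

assert ev_release < ev_keep_boxed  # with these priors, keep it boxed
```

With these numbers, releasing costs roughly 0.2 extinctions in expectation versus 0.001 for staying boxed, so the AI must first move p_catastrophe up by orders of magnitude before release becomes the better gamble, which is exactly the "provide sufficient evidence FIRST" condition.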
If the gatekeepers have such a high prior that the AI is hostile, why are we even letting it talk? What are we expecting to learn from such a conversation?
Note that the ‘high prior for hostility’ could be, say, 0.2. That still leaves a 20% chance of the AI utterly destroying everything they hold dear, and yet an 80% chance that they can get a useful sentence of text from a friendly superintelligence.