It becomes more complicated when the author of the proof is a superintelligence trying to exploit flaws in the verifier. Probably more importantly, you may not be able to formally verify that the “Friendliness” that the AI provably possesses is actually what you want.
True about the possibility that the AGI might try to trick you. But from what I understand, the goal of SI is to come up with a verifiable FAI. You can specify whatever high standard of verifiability you want as the unboxing condition.
“You can specify whatever standard of verifiability you want” is vague. You can say “I want to be absolutely right about whether it’s Friendly”, but you can’t have that unless you know what Friendly means, and are smart enough to specify a standard for checking on it.
If you could be sure you had a cooperative AGI which could just give you an FAI, I think you’d have basically solved the problem of creating an FAI... but that’s the problem you’re trying to get the AGI to solve for you.