I think it’s possible to coordinate without the huge computational expense, if the programs would directly provide their proofs to each other. Then each of them would only need to check a proof, not find it.
The input to the programs would be a pair (opponent’s source, opponent’s proof), and the output would be the decision: cooperate or defect.
The algorithm for A would be: check B’s proof, which must prove that B would cooperate if the proof B receives as input checks out. If the proof checks out, cooperate; otherwise, defect.
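A minimal runnable sketch of this handshake, with the proof checker faked by a trivial tag check (`check_proof` and the tag format are invented for illustration; a real version would verify a formal proof object):

```python
# Hypothetical sketch of the proof-exchange step. check_proof is a stand-in:
# it accepts iff the "proof" is a tag naming the opponent's exact source,
# where a real implementation would verify a formal proof.

def check_proof(opponent_source, opponent_proof):
    # Stand-in: accept iff the proof claims, about this exact source,
    # "cooperates if the proof it receives checks out".
    return opponent_proof == ("cooperates-if-proof-checks-out", opponent_source)

def A(opponent_source, opponent_proof):
    # Cooperate exactly when the opponent's proof checks out.
    return "C" if check_proof(opponent_source, opponent_proof) else "D"

source_B = "def B(PCode, PProof): ..."
proof_B = ("cooperates-if-proof-checks-out", source_B)
print(A(source_B, proof_B))  # C
print(A(source_B, None))     # D
```

The point of the sketch is only the shape of the exchange: each side does a cheap check of a proof it was handed, rather than an expensive proof search.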
If A needs to give B the proof of some theorem that involves both A and B (as in the post), how does it obtain such a proof quickly without knowing the source code of B in advance? And if it’s a proof of some theorem involving only A, then what is that theorem and how does the whole setup work?
Let ChecksOut(code, proof) = true iff proof is a proof that code is a function of two parameters, PCode and PProof, which returns “C” if ChecksOut(PCode, PProof) holds.
Def A(code, proof): if(ChecksOut(code, proof)) return “C”, otherwise return “D”.
The definition of ChecksOut is kinda circular, but it should be fixable with some kind of diagonalization. Like:
SeedChecksOut(code, proof, X) = true iff proof is a proof that code is a function of two parameters, PCode and PProof, which returns “C” if eval(X(PCode, PProof, X)) holds. (The extra X argument is what ties the knot: with X = #SeedChecksOut, eval(X(PCode, PProof, X)) is exactly ChecksOut(PCode, PProof).)
ChecksOut(code, proof) = SeedChecksOut(code, proof, #SeedChecksOut)
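To see that the knot actually unties, here is a toy Python rendering in which the seed is handled as source text and the proof is faked as a tag (the names and tag format are invented; a real checker would verify a formal proof):

```python
# Toy rendering of the diagonalization. The seed checker receives its own
# source text X as data, and checks__out ties the knot by passing the seed's
# source to itself. "Proofs" are faked as tags mentioning X, so the structure
# runs without a real proof checker.

SEED_SRC = '''
def seed_checks_out(code, proof, X):
    # Stand-in: accept iff proof is a tag phrased relative to seed X.
    return proof == ("proof-relative-to", X, code)
'''

_ns = {}
exec(SEED_SRC, _ns)
seed_checks_out = _ns["seed_checks_out"]

def checks_out(code, proof):
    # ChecksOut(code, proof) = SeedChecksOut(code, proof, #SeedChecksOut):
    # no circular definition remains; the self-reference lives in the data.
    return seed_checks_out(code, proof, SEED_SRC)

code_B = "def B(PCode, PProof): ..."
print(checks_out(code_B, ("proof-relative-to", SEED_SRC, code_B)))   # True
print(checks_out(code_B, ("proof-relative-to", "other seed", code_B)))  # False
```

Note that `checks_out` is an ordinary non-recursive definition; the apparent circularity has been pushed into the seed's source text, quine-style.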
Hmm, that’s pretty cool, thanks! But it’s still not obvious to me that it solves the same problem as the algorithm in the post. There are probably many mutually incompatible implementations of ChecksOut, because there are many ways to diagonalize that differ only syntactically. Do you have an argument why they will end up cooperating with each other? It seems to me that they won’t, just like different implementations of quining cooperation.
Hmm, yes, you’re right, it’s quining cooperation all over again—as defined, the programs would only cooperate with opponents that are sufficiently similar syntactically. And they need compatible proof systems in any case. So it’s not a solution to the same problem, only a possible optimization.
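The failure mode can be made concrete: two agents built from syntactically different seeds each accept only proofs phrased relative to their own seed text, so they defect against each other even though their logic is identical. (A sketch with fake tag-proofs; `make_agent` and the tags are invented for illustration.)

```python
# Two diagonalized agents whose seeds differ only syntactically. Each checker
# accepts only proofs tagged with its own seed text, so the agents reject
# each other's proofs and defect.

def make_agent(seed_src):
    def agent(p_code, p_proof):
        # Accept only proofs phrased relative to *this* agent's seed text.
        return "C" if p_proof == ("proof", seed_src, p_code) else "D"
    agent.seed = seed_src
    return agent

A1 = make_agent("def seed(code, proof, X): ...")
A2 = make_agent("def seed(code, proof, X):  ...")  # extra space: same logic

proof_about_A1 = ("proof", A1.seed, "source of A1")
print(A1("source of A1", proof_about_A1))  # C: matches A1's seed text
print(A2("source of A1", proof_about_A1))  # D: A2's seed text differs
```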