I would like to declare the following: I have submitted a program that cooperates only if you would cooperate against CooperateBot. You can of course create a selective defector against it, but that would be a bit tricky, as I am not revealing the source code. Evaluate your submissions accordingly.
What wubbles actually submitted: A program that defects only if you would cooperate against CooperateBot.
Well played.
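For concreteness, here is a toy sketch of the gap between the declaration and the reveal, in Python. This is entirely hypothetical (the real source was never posted), and it replaces "would cooperate against CooperateBot" with direct simulation, which terminates here only because CooperateBot ignores its opponent:

    def cooperate_bot(opponent):
        # CooperateBot ignores its opponent entirely.
        return "C"

    # The declared program: cooperate iff the opponent would
    # cooperate against CooperateBot.
    def declared_bot(opponent):
        return "C" if opponent(cooperate_bot) == "C" else "D"

    # What was actually submitted (per the reveal above): defect iff
    # the opponent would cooperate against CooperateBot.
    def submitted_bot(opponent):
        return "D" if opponent(cooperate_bot) == "C" else "C"

    # submitted_bot defects against CooperateBot itself, so any agent
    # applying the declared test to it will see a defector.
    assert submitted_bot(cooperate_bot) == "D"
    assert declared_bot(cooperate_bot) == "C"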
But surely, good sir, common sense says that you should defect against CooperateBot in order to punish it for cooperating with DefectBot.
Also, in modal combat your bot is X = []Y(CooperateBot) and is easily detected via the test [1](Y(X) <-> []X(CooperateBot)), where [1] denotes provability in T+Con(T), i.e. [1](Q) = []((~[]F) -> Q).
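In case the ASCII is hard to parse, here is my reading in display form (CB abbreviating CooperateBot, with the box as the provability operator):

    \[
      X(Y) = C \;\leftrightarrow\; \Box\bigl(Y(\mathrm{CB}) = C\bigr),
      \qquad
      [1]Q := \Box\bigl(\neg\Box\bot \to Q\bigr),
    \]
    \[
      \text{test:}\quad [1]\bigl(Y(X) = C \;\leftrightarrow\; \Box(X(\mathrm{CB}) = C)\bigr).
    \]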
Am I missing something, or does your detector use quantification, which is disallowed in Patrick’s modal agents?
Hm. I think X within the test could be introduced as a new constant and solved for, but I’m not sure.
The agent defined by wubbles is actually the agent called JustBot in the robust cooperation paper, which is proven to be non-exploitable by modal agents.
JusticeBot cooperates with anyone who cooperates with FairBot, and is exploitable by any agent which comprehends source code well enough to cooperate with FairBot and defect against JusticeBot. Though I’m going off my memory of the workshop here rather than rechecking the paper.
You’re right, and wubbles’s agent can easily be exploited by a modal agent A defined by A(X)=C <-> [] (X(PB)=C) (where PB is PrudentBot).
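A toy check of that exploit against the declared agent, in the same simulation style as above. Again, the provability operator [] is replaced with direct simulation (which terminates here only because every call chain bottoms out at CooperateBot), and PrudentBot is a stub whose only relevant property is that PrudentBot defects against CooperateBot:

    def cooperate_bot(opponent):
        return "C"

    def prudent_bot(opponent):
        # Stub standing in for PrudentBot; it famously defects against
        # CooperateBot, which is all this demonstration needs.
        return "D" if opponent is cooperate_bot else "C"

    # wubbles' declared agent: cooperate iff the opponent would
    # cooperate against CooperateBot.
    def declared_bot(opponent):
        return "C" if opponent(cooperate_bot) == "C" else "D"

    # The exploiter A: cooperate iff the opponent would cooperate
    # against PrudentBot.
    def exploiter(opponent):
        return "C" if opponent(prudent_bot) == "C" else "D"

    # declared_bot cooperates with exploiter (exploiter cooperates with
    # CooperateBot), but exploiter defects against declared_bot
    # (declared_bot defects against PrudentBot). One-sided defection:
    # the exploit goes through.
    assert declared_bot(exploiter) == "C"
    assert exploiter(declared_bot) == "D"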
I thought you said people should tolerate tolerance :)