Doesn’t this totally kill the idea of resolving ASP by having the agent prove something about its own actions in fewer than N steps (for N << L) that was posted last week?
I’m not sure. Can you expand? Also, the followup post seems a little more relevant to ASP.
Well, if it’s possible for the predictor to prove (in N steps) that A()==1, then it’s also possible for the agent to prove A()==1 in N steps, hence A() =/= 2, in fewer than L steps; but then the agent returns 2, contradicting that proof. So there is no N-step proof that A()==1, and you can’t get the million dollars.
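To make the diagonalization step explicit, here's a toy sketch of the kind of agent I mean (my own mock, not the ASP post's actual code; `find_proof` is a hypothetical stand-in for bounded proof search, and the budgets N and L are placeholders):

```python
N = 100      # predictor's proof budget (assumed for illustration)
L = 10**6    # agent's much larger proof budget, N << L

def find_proof(statement, max_steps):
    """Stand-in for bounded proof search in the agent's formal system.
    A real implementation would enumerate proofs up to max_steps;
    this mock finds nothing."""
    return None

def A():
    # Diagonalization: if the agent ever finds a short proof that it
    # takes action 1, it takes action 2 instead, contradicting that proof.
    # So no such short proof can exist.
    if find_proof("A()==1", N) is not None:
        return 2
    # Otherwise fall back to ordinary utility-based proof search.
    if find_proof("A()==1 implies U()==10**6", L) is not None:
        return 1
    return 2
```

With the mock prover finding nothing, this agent defaults to action 2, which is exactly the failure mode being argued here.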
In my formulation of ASP, the predictor has a stronger formal system than the agent, so some statements can have short proofs in the predictor’s formal system but only long proofs in the agent’s formal system. An extreme example is consistency of the agent’s formal system, which is unprovable by the agent but an axiom for the predictor.
Hm. So what would you still need to win? You’d still need to prove that A()==1 implies U()==10^6. But if that’s true, the only way the agent doesn’t prove that A()==2 implies U()==10^6+10^3 is if the predictor really is always right. And that’s not provable by the agent, because I’m pretty sure it relies on the consistency of the agent’s formal system.
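For concreteness, here's the payoff table I have in mind (a toy model of the usual Newcomb-style setup; taking the prediction as an explicit argument is my simplification, not how the actual formulation defines U):

```python
def U(action, prediction):
    # Toy Newcomb-style payoffs for ASP (my reading, not the post's code):
    # the predictor fills the big box with $10^6 iff it predicts action 1;
    # the small box always holds $10^3. Action 1 = take only the big box,
    # action 2 = take both boxes.
    big_box = 10**6 if prediction == 1 else 0
    small_box = 10**3
    return big_box if action == 1 else big_box + small_box
```

So A()==2 yields 10^6+10^3 only when the prediction was 1, i.e. only when the predictor was wrong; if the predictor is always right, two-boxing gets just 10^3.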
The ASP post already has a rigorous proof that this implementation of A will fail. But maybe we can find a different algorithm for A that would search for proofs of some other form, and succeed?