In both cases, it can block the Löbian proofs. But there is something unsatisfying about making ad-hoc adjustments to one’s policy like this. I’ll quote Demski on this instead of trying to write my own explanation. Demski writes:
Secondly, an agent could reason logically but with some looseness. This can fortuitously block the Troll Bridge proof. However, the approach seems worryingly unprincipled, because we can “improve” the epistemics by tightening the relationship to logic, and get a decision-theoretically much worse result.
The problem here is that we have some epistemic principles which suggest tightening up is good (it’s free money; the looser relationship doesn’t lose much, but it’s a dead-weight loss), and no epistemic principles pointing the other way. So it feels like an unprincipled exception: “being less dutch-bookable is generally better, but hang loose in this one case, would you?”
Naturally, this approach is still very interesting, and could be pursued further—especially if we could give a more principled reason to keep the observance of logic loose in this particular case. But this isn’t the direction this document will propose. (Although you could think of the proposals here as giving more principled reasons to let the relationship with logic be loose, sort of.)
So here, we will be interested in solutions which “solve troll bridge” in the stronger sense of getting it right while fully respecting logic. IE, updating to probability 1 (/0) when something is proven (/refuted).
feels to me like the problems with Löb’s theorem come from treating worldly cause and effect as logical implication above the proof system, which is just silly. (i’m no expert)
That certainly is one of the problems. Even if the world actually is a formal system in some sense, how can an agent prove that it is?
This is beside the fact that this whole class of agents is simply inconsistent: formal systems that can prove Prov(P) → P in general (as is assumed here for the use of Löb’s theorem) are, by Gödel’s second incompleteness theorem, known to be inconsistent.
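To spell out the inconsistency claim, here is a short sketch in standard provability-logic notation, assuming the system T satisfies the usual derivability conditions under which Löb’s theorem holds:

```latex
% Löb's theorem: if $T \vdash \Box P \to P$, then $T \vdash P$.
% Suppose T proves the reflection schema for every sentence, in particular for $P = \bot$:
T \vdash \Box\bot \to \bot
% By Löb's theorem with $P = \bot$:
T \vdash \bot
% So T is inconsistent. Equivalently, $\Box\bot \to \bot$ is just $\neg\Box\bot$,
% i.e. $\mathrm{Con}(T)$, and Gödel's second incompleteness theorem says a
% consistent T cannot prove $\mathrm{Con}(T)$.
```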
I guess I disagree: even if the agent intentionally implements a non-loose policy, it won’t subjectively believe that it will act exactly non-loosely, because some random thing could go wrong.
to what extent are the problems with Löb’s theorem solved by “a proof only implies the agent takes the action with 99.9% probability, because the agent is fallible”?
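One way to see why the probabilistic link might help: the Löbian argument only goes through if the agent’s system actually proves the implication from provability to action. A hedged sketch of this reading of the question, where □ is provability in the agent’s theory T and A is “the agent takes the action”:

```latex
% Löb's theorem only fires when its hypothesis is provable:
T \vdash \Box A \to A \quad\Longrightarrow\quad T \vdash A
% If a proof of A only gives the agent a 99.9% chance of doing A
% (because the agent is fallible), then T does not prove $\Box A \to A$,
% only some probabilistic analogue such as $P(A \mid \Box A) = 0.999$.
% The hypothesis of Löb's theorem fails, so the Löbian proof is blocked.
```

This is only a sketch of the intuition; making “a probabilistic analogue” precise is exactly the kind of loosened relationship to logic that the quoted passage above worries is unprincipled.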
Don’t know what part of the post you’re referring to.