This sounds like PA is not actually the logic you’re using.
Maybe this is the confusion. I’m not using PA. I’m assuming (well, provisionally assuming) PA is consistent.
If PA is consistent, then an agent using PA believes the world is consistent—in the sense of assigning probability 1 to tautologies, and also assigning probability 0 to contradictions.
(At least, 1 to tautologies it can recognize, and 0 to contradictions it can recognize.)
Hence I (standing outside of PA) assert that, since I think PA is probably consistent, agents who use PA don’t know whether PA is consistent, but do believe the world is consistent.
If PA were inconsistent, then we need more assumptions to tell us how probabilities are assigned. E.g., maybe the agent “respects logic” in the sense of assigning 0 to refutable things. Then it assigns 0 to everything. Or maybe it “respects logic” in the sense of assigning 1 to provable things. Then it assigns 1 to everything. (But we can’t have both. The two notions of “respecting logic” are equivalent if the underlying logic is consistent, but not otherwise.)
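To spell out the two notions (a sketch; ⊢ stands for provability in the agent’s logic and P for its credence assignment):

```latex
% Two ways a credence assignment P might "respect logic":
\text{(refutation-respecting)} \quad \vdash \neg\varphi \;\Rightarrow\; P(\varphi) = 0
\text{(proof-respecting)}      \quad \vdash \varphi     \;\Rightarrow\; P(\varphi) = 1
% If the logic is inconsistent, every sentence is both provable and
% refutable, so the first condition forces P \equiv 0 and the second
% forces P \equiv 1; since each sentence gets one number, they cannot
% both hold. If the logic is consistent, no sentence is both provable
% and refutable, and the two conditions are jointly satisfiable.
```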
But such an agent doesn’t have much to say for itself anyway, so it’s more interesting to focus on what the consistent agent has to say for itself.
And I think the consistent agent very much does not “hold open the possibility” that the world is inconsistent. It actively denies this.
There are two ways to express “PA is consistent”. The first is the schema ∀A¬(A∧¬A) (one instance per sentence A). The other is a complicated construct about Gödel-encodings. Each has a corresponding version of “the world is consistent” (indeed, this “world” is inside PA, so they are basically equivalent). The agent using PA will believe only the former. The Troll expresses the consistency of PA using provability logic, which, if I understand correctly, has the Gödelization built in.
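Writing the two renderings out (using the standard provability-logic notation, sketched from memory):

```latex
% (1) Internal, schematic: for each sentence A, PA proves the tautology
\neg (A \wedge \neg A)

% (2) Arithmetized: Con(PA), a single sentence about Gödel codes,
% rendered in provability logic as \neg\Box\bot (\Box = "provable in PA"):
\mathrm{Con}(\mathrm{PA}) \;:\equiv\; \neg\,\mathrm{Prov}_{\mathrm{PA}}(\ulcorner 0 = 1 \urcorner) \;\equiv\; \neg \Box \bot

% By Goedel's second incompleteness theorem, if PA is consistent then
% PA proves every instance of (1) but does not prove (2).
```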