I am, as usual, a bit confused. If you require a sentence to be consistent with (e.g.) PA before it can be added to Tn+1, this proposal is unable to assign nonzero probability to the trillionth digit of pi being 2 - and, conditional on the trillionth digit of pi counterfactually being 2, it is unable to go on believing PA.
It seems like some looser condition for admitting sentences to the theory is needed - not just as a sop to practicality, but to capture some important desiderata of logical counterfactuals.
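To make the strict-vs-loose contrast concrete, here is a toy sketch (not the actual proposal - `slow_digit` is a hypothetical stand-in for an expensive arithmetic fact like a far-out digit of pi, and the "consistency checks" are caricatures). An idealized checker that always finishes the computation refutes every false digit, so counterfactual digits get probability exactly 0; a resource-bounded checker gives up before refuting them, so all ten candidates survive and can receive nonzero probability:

```python
import hashlib

def slow_digit(n, work):
    # Stand-in for "the trillionth digit of pi": a deterministic
    # value that costs `work` hash iterations to compute.
    h = str(n).encode()
    for _ in range(work):
        h = hashlib.sha256(h).digest()
    return h[0] % 10

def strict_check(claimed, n, work):
    # Idealized consistency check: always completes the computation,
    # so every false claim is refuted.
    return claimed == slow_digit(n, work)

def bounded_check(claimed, n, work, budget):
    # Resource-bounded check: report "no contradiction found" if
    # refuting the claim would exceed the budget.
    if work > budget:
        return True
    return claimed == slow_digit(n, work)

WORK = 10_000

# Strict rule: only the actual digit survives, so each
# counterfactual digit is assigned probability 0.
strict_survivors = [d for d in range(10) if strict_check(d, 0, WORK)]

# Bounded rule with a small budget: all ten candidate digits
# survive, so each can get nonzero probability.
bounded_survivors = [d for d in range(10) if bounded_check(d, 0, WORK, budget=100)]

print(strict_survivors)        # a single-element list: the true digit
print(len(bounded_survivors))  # 10
```

The point of the caricature is only that the "looseness" has to come from bounding the search for contradictions, not from weakening what counts as a contradiction.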
Here’s the picture of logical counterfactuals that I’m currently thinking under:
People have some method for generating mental models of math, and these mental models have independencies that the ground-truth mathematics doesn't. E.g., when I imagine the trillionth digit of pi being 2, this doesn't change (in the mental model) whether the Collatz conjecture is true. In fact, for typical scenarios I consider, I can continue to endorse (in my mental model) the usual properties of the real numbers even while entertaining a collection of statements most of which are inconsistent with those properties (as when assigning a distribution over some digit of pi).
This apparent independence produces an apparent partial causal graph (within a certain family of mental models), which leads to the use of causal language like “Even if one set the trillionth digit of pi to 2, it would not change the things I’m taking for granted in my mental models, nor would it change the things that would not change in my mental model when I change the setting of this digit of pi.”