Is it reasonable to assign P(X) = P(will be proven(X)) / (P(will be proven(X)) + P(will be disproven(X))) ?
No. Consider Goldbach’s conjecture: any even number above four is the sum of two odd primes. If it’s false, it can be disproven via counterexample. If it’s true, the proof must be much more sophisticated. At this point, we might be able to say that if it were feasible to disprove it via counterexample, that would have already happened, but that certainly wasn’t always the case.
Goldbach’s conjecture is “Every even integer above four is the sum of two primes,” surely?
Also, Gödel’s incompleteness theorem shows that there are statements which are true but unprovable, so you get something like:
P(X) = (P(will be proven(X)) + P(is true but unprovable(X))) / (P(will be proven(X)) + P(will be disproven(X)) + P(is true but unprovable(X)))
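Plugging in some made-up numbers shows how the adjustment shifts the estimate (every probability here is invented purely for illustration):

```python
# Illustrative only: these probabilities are invented for the example.
p_proven = 0.3           # P(will be proven(X))
p_disproven = 0.1        # P(will be disproven(X))
p_true_unprovable = 0.2  # P(is true but unprovable(X))

# The adjusted estimate from the formula above:
p_x = (p_proven + p_true_unprovable) / (p_proven + p_disproven + p_true_unprovable)
print(round(p_x, 3))  # 0.833
```

Note the remaining mass (here 0.4) is simply unaccounted for; the formula renormalizes over the three listed outcomes.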
Is there a reason to suspect that a counterexample wouldn’t be a very large number that hasn’t been considered yet? Consider sublime numbers: the first (12) is a number which will be checked by any search process, but there is another which has 76 digits and, I would suspect, could be missed by some searches.
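For what it’s worth, the sublime-number property is easy to brute-force for small numbers; this naive sketch finds 12 but would never get anywhere near the 76-digit one:

```python
def divisors(n):
    # All positive divisors of n, by trial division (fine for small n only).
    return [d for d in range(1, n + 1) if n % d == 0]

def is_perfect(n):
    # A perfect number equals the sum of its proper divisors, i.e. sigma(n) == 2n.
    return n > 1 and sum(divisors(n)) == 2 * n

def is_sublime(n):
    # Sublime: both the count of divisors and the sum of divisors are perfect.
    ds = divisors(n)
    return is_perfect(len(ds)) and is_perfect(sum(ds))

print([n for n in range(1, 100) if is_sublime(n)])  # [12]
```

(12 has 6 divisors summing to 28, and both 6 and 28 are perfect.)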
P(X) = (P(will be proven(X)) + P(is true but unprovable(X))) / (P(will be proven(X)) + P(will be disproven(X)) + P(is true but unprovable(X)))
It still only works on assumptions that might generally make good approximations, but aren’t necessarily true.
More to the point, Phil was suggesting that something is false on the basis that it would be difficult to prove if true, but easy to prove if false. In other words, he was using it precisely because that example was one where the implicit assumption in his approximation was particularly far from the truth: the assumption that the ratio of P(will be proven(X)) to P(will be disproven(X)) is about the same as the ratio of P(is true but will not be proven(X)) to P(is false but will not be disproven(X)).
Is there a reason to suspect that a counterexample wouldn’t be a very large number that hasn’t been considered yet?
Suppose they’ll check every number up to n, and that we’re using a logarithmic prior as to where the first counterexample is. Suppose also that we’ll eventually find it. Then the counterexample is equally likely to be between one and sqrt(n) as between sqrt(n) and n. This means that if m, the number of numbers we’ve already checked, is more than sqrt(n), we’ve probably already found it. And since m is already pretty big, we’re unlikely to ever check m times as many numbers as we’ve checked so far, which is what reaching m^2 would require.
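A minimal sketch of that argument, assuming a log-uniform prior on [1, n] for where the first counterexample falls:

```python
import math

def p_found_by(m, n):
    """P(first counterexample <= m) under a log-uniform prior on [1, n]."""
    return math.log(m) / math.log(n)

# Checking up to sqrt(n) already covers half the prior mass:
n = 4e18
print(round(p_found_by(math.sqrt(n), n), 6))  # 0.5
```

So once the checked range m exceeds sqrt(n), most of the prior mass for a findable counterexample is already behind us.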
Looking into it more, they’ve been using cleverer methods to prove that it’s true up to certain numbers, and we know it’s true up to about 4*10^18. It would be impossible to even check a number that size without some kind of clever method, let alone check enough numbers to find the counterexample.
Edit: Fixed Goldbach’s conjecture.
Now that would be an impressive feat...
Fixed. Thanks.