Is it reasonable to assign P(X) = P(will_be_proven(X)) / (P(will_be_proven(X)) + P(will_be_disproven(X))) ?
It’s also possible that X will never be either proven or disproven.
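To make the proposal concrete, here is a minimal sketch with made-up probabilities (the values 0.30 / 0.10 / 0.60 are hypothetical, chosen only so the three outcomes are exhaustive). It shows that the formula is just conditioning on X eventually being resolved one way or the other:

```python
# Hypothetical probabilities for the three possible fates of claim X.
p_proven = 0.30      # P(X will eventually be proven)
p_disproven = 0.10   # P(X will eventually be disproven)
p_never = 0.60       # P(X is never proven or disproven)

# The three outcomes are exhaustive and mutually exclusive.
assert abs(p_proven + p_disproven + p_never - 1.0) < 1e-9

# The proposal assigns P(X) by conditioning on eventual resolution:
p_x = p_proven / (p_proven + p_disproven)
print(p_x)  # 0.75
```

Note that the 0.60 mass on "never resolved" drops out entirely, which is exactly the worry raised above: the estimate silently assumes that X's truth value matches how it would be resolved, if it were resolved.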
For a specific proposed proof, now that proofs can be machine-checked, it does make sense to expect it to be either validated or invalidated.
But if one could prove that the many worlds interpretation of quantum mechanics is correct, that would constitute a disproof of X.
I’m not sure I follow. But then, I don’t know of any definition of free will that isn’t self-contradictory, so it’s not surprising that I don’t understand what would convince those who take free will seriously.