Eliezer, I think I agree with most of what you say in this post, but unless I misunderstand what you mean by “Bayesian confirmation,” I think you’re wrong about this bit:
If the “boring view” of reality is correct, then you can never predict anything irreducible because you are reducible. You can never get Bayesian confirmation for a hypothesis of irreducibility, because any prediction you can make is, therefore, something that could also be predicted by a reducible thing, namely your brain.
I think that while, in this case, you can never devise an empirical test whose outcome would logically prove irreducibility, there is no clear reason to believe that you cannot devise a test whose outcome, in an irreducible world, would make irreducibility subjectively much more probable (given an Occamian prior).
Without getting into reducibility/irreducibility, consider the scenario that the physical universe makes it possible to build a hypercomputer (one that performs operations on arbitrary real numbers, for example), but that our brains do not actually make use of this: they can be simulated perfectly well by an ordinary Turing machine, thank you very much. If this scenario were true, would it follow that we cannot possibly obtain "Bayesian confirmation" of its truth? I don't think so.

Of course, any empirical test our brains could devise in this scenario could also be passed by a Turing machine that simulated our brains to decide what its answer should be. In fact, every test of the form "does the universe do X if we do Y at time T" that we might devise to check whether the universe allows for infinite computation can be met by a Turing-machine universe whose code simply includes the instruction to do X at time T. But such a Turing machine may be complex enough that we start taking "the universe allows for hypercomputation" to be the simpler, and thus more probable, alternative; unless, that is, we are willing to exclude that possibility a priori, which I'm not willing to do and I expect you aren't, either.
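The Occamian bookkeeping here can be made concrete with a toy sketch. All the numbers below are made up purely for illustration; the point is only the shape of the comparison. Both hypotheses predict the observed outcomes perfectly, so the likelihoods cancel and the posterior odds equal the prior odds under a 2^(-description length) prior. The lookup-table universe must hard-code one more "do X at time T" rule per test, so its description length grows, while the hypercomputation hypothesis stays a fixed size:

```python
# Toy illustration with entirely made-up numbers: two hypotheses that both
# predict every observed test outcome with probability 1.
# H_hyper: "the universe permits hypercomputation" -- fixed description length.
# H_table: a Turing-machine universe whose code hard-codes each outcome
#          ("do X at time T") -- its description length grows with every test.

HYPER_BITS = 500         # assumed fixed complexity of the hypercomputation hypothesis
TABLE_BASE_BITS = 100    # assumed base complexity of the lookup-table universe
BITS_PER_OUTCOME = 32    # assumed cost of hard-coding one more "do X at T" rule

def log2_posterior_odds(num_tests: int) -> int:
    """Log-odds (in bits) for H_hyper over H_table under a 2^-length prior.

    Both hypotheses assign probability 1 to the observed outcomes, so the
    likelihoods cancel and the posterior odds equal the prior odds:
    2**(-HYPER_BITS) / 2**(-table_bits) = 2**(table_bits - HYPER_BITS).
    """
    table_bits = TABLE_BASE_BITS + BITS_PER_OUTCOME * num_tests
    return table_bits - HYPER_BITS

# With few tests the lookup table is simpler, so hypercomputation is disfavoured;
# after enough hard-coded outcomes, the fixed-size hypothesis wins.
print(log2_posterior_odds(0))    # -> -400 (negative: favours the lookup table)
print(log2_posterior_odds(100))  # -> 2800 (positive: favours hypercomputation)
```

Nothing hinges on the specific constants; any assignment in which the per-outcome cost is positive gives the same qualitative flip once enough tests have been run.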
Thus, I think that either your argument doesn’t support your conclusion, or I don’t understand your argument yet :-)