For example, if our universe does in fact contain halting problem oracles, the Bayesian superintelligence with a TM-based universal prior will never be able to believe that.
I think this problem would vanish if you spelled out what “believe” means. The Bayesian superintelligence would quickly learn to trust the opinion of the halting problem oracle; therefore, it would “believe” it.
I have trouble thinking of any sensible definition of “believe” under which the superintelligence would fail to believe what its evidence tells it is true. This would be especially obvious if the oracle machine were very small: the superintelligence would just apply Occam’s razor and figure it out.
Of course, one could imagine a particularly stupid agent too daft to do this, but then it would hardly be much of a superintelligence.
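The “learn to trust the oracle” step is just ordinary Bayesian updating, and a toy sketch makes the point concrete. The setup below is purely illustrative (not from the original discussion): an agent watches a predictor that keeps being right and weighs two hypotheses, “always right” versus “guessing at chance”. Each correct prediction doubles the odds in favor of reliability, so even a skeptical prior converges to trust within a few observations.

```python
# Toy illustration (assumed setup, not from the source): a Bayesian agent
# observing a perfectly reliable predictor. Two hypotheses:
#   H1: the predictor is always right
#   H2: the predictor is guessing (right with probability 0.5)
# Each correct prediction multiplies the odds for H1 by 1/0.5 = 2.

def update(prior_h1: float, predictor_correct: bool) -> float:
    """One Bayesian update of P(H1) given one observed prediction."""
    like_h1 = 1.0 if predictor_correct else 0.0  # H1 permits no mistakes
    like_h2 = 0.5                                # coin-flip guesser
    evidence = like_h1 * prior_h1 + like_h2 * (1.0 - prior_h1)
    return like_h1 * prior_h1 / evidence

belief = 0.01  # start skeptical: 1% prior that the predictor is reliable
for _ in range(20):
    belief = update(belief, predictor_correct=True)  # oracle keeps being right

print(belief)  # very close to 1 after 20 correct predictions
```

After twenty consecutive correct predictions the posterior odds are 2^20 times the prior odds, so the agent’s credence in the predictor’s reliability exceeds 0.999. Whether that operational trust counts as “believing” the oracle is exactly the definitional question at issue.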