@Toby: Why, yes, I was feeling rather grateful at that point that I hadn’t quantified the probability. It’s fair to assume that it would have been low enough that I couldn’t plausibly recover from the calibration hit, like 1% or something.
@Scott: This entire discussion assumes unbounded (but finite) internal computing power, so P vs. NP cuts no ice here. Otherwise, of course a larger environment can outsmart you mathematically.
@Will: Mathematical truths are about which axioms imply which theorems.
@Wei: A halting oracle is usually said to output 1s or 0s, not proofs or halting times, right?
Also @Wei: I don’t recall if I’ve mentioned this before, but Solomonoff induction in the mixture form makes no mention of the truth of its models. It just says that any computable probability distribution is in the mixture somewhere, so you can do as well as any computable form of cognitive uncertainty up to a constant.
In other words, if there’s any computable reaction that you have to discovering what looks like a black box halting solver—any computable reasoning that decides that “this looks like an uncomputable halting solver” and produces new distributions over computably related events as a result—then that’s in the Solomonoff mixture.
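To make the mixture point concrete, here's a toy sketch (mine, not real Solomonoff induction, which mixes over *all* computable distributions with complexity-weighted priors): a Bayesian mixture over just two hand-picked computable predictors of a bit stream. If one predictor in the mixture happens to model the black box's outputs well, the posterior concentrates on it, so the mixture's predictions end up as good as that predictor's, paying only a constant (the prior) in total log loss. The specific "black box" pattern and the two predictors below are hypothetical stand-ins.

```python
# Toy Bayesian mixture over computable predictors (illustrative only;
# genuine Solomonoff induction mixes over all computable distributions).

def mixture_posteriors(predictors, priors, stream):
    """Update posterior weights over predictors from an observed bit stream.

    Each predictor is a function (history) -> P(next bit = 1).
    """
    weights = list(priors)
    history = []
    for bit in stream:
        # Weight each predictor by the likelihood it assigned the observed bit.
        likelihoods = []
        for predict in predictors:
            p1 = predict(history)
            likelihoods.append(p1 if bit == 1 else 1.0 - p1)
        weights = [w * lik for w, lik in zip(weights, likelihoods)]
        total = sum(weights)
        weights = [w / total for w in weights]  # renormalize
        history.append(bit)
    return weights

# Hypothetical black box: answers 1 on even-indexed queries, 0 otherwise
# (standing in for outputs that *look* like a halting solver's answers).
stream = [1 if i % 2 == 0 else 0 for i in range(50)]

coin = lambda h: 0.5                                    # "it's just noise"
box_model = lambda h: 0.99 if len(h) % 2 == 0 else 0.01 # computable model of the box

weights = mixture_posteriors([coin, box_model], [0.5, 0.5], stream)
# Nearly all posterior weight lands on the predictor that models the box.
```

The point is that "this looks like an uncomputable halting solver, so I'll predict its future outputs thus-and-so" is itself a computable predictor, and anything of that shape is already in the mixture.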
Solomonoff is not really as bad as it sounds.
But when it comes to making use of the results to incorporate the halting oracle via self-modification—then Solomonoff blows a fuse, of course, because it was never designed for self-modification in the first place; it’s a Cartesian formalism that puts the universe irrevocably on the outside.