None of these are worth anything unless you look for testable predictions rather than simply explaining the existing data. The problem with the UFO explanation is not that it has a fantastically low Solomonoff prior, but that it predicts nothing that would differentiate it from an explanation with a better prior.
In that vein, Pascal’s Goldpan with a very specific utility function is one way to go: for each model, construct (independent) testable predictions and estimate the a priori probability of each one, then sum the negative logs of those probabilities and call the total the model’s utility. Basically, the more predictions a model makes and the unlikelier they are, the higher its utility. Then test the predictions. Among the models whose every prediction is confirmed, pick the one with the highest utility.
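The scoring rule above can be sketched in a few lines of Python. This is only an illustration under stated assumptions: the model names, probabilities, and the `utility`/`pick_model` helpers are all hypothetical, and "confirmed" is taken as a given boolean rather than the result of an actual experiment.

```python
import math

def utility(prediction_probs):
    """Surprisal-based utility: sum of negative log-probabilities (in bits).

    prediction_probs: a priori probabilities of a model's independent,
    testable predictions. Lower-probability (riskier) predictions
    contribute more utility.
    """
    return sum(-math.log2(p) for p in prediction_probs)

def pick_model(models):
    """models: dict mapping name -> (prediction_probs, all_confirmed).

    Among models whose every prediction was confirmed, return the name
    of the one with the highest utility (None if none survive testing).
    """
    confirmed = {name: utility(probs)
                 for name, (probs, ok) in models.items() if ok}
    return max(confirmed, key=confirmed.get) if confirmed else None

# Hypothetical example: model B makes riskier (lower-probability)
# predictions than A; model C's prediction was disconfirmed.
models = {
    "A": ([0.5, 0.5], True),   # utility = 2 bits
    "B": ([0.1, 0.25], True),  # utility ≈ 5.32 bits
    "C": ([0.01], False),      # disconfirmed, excluded
}
print(pick_model(models))  # → B
```

Note that because the predictions are assumed independent, summing negative logs is the same as taking the negative log of the joint probability, so this rewards exactly the models that stick their necks out furthest and survive.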
I’m sure this approach has a name, but Google failed me...