I also put 0 for MWI, although I feel pretty good about that. (For reasons explained in this comment, a 0 means that my answer is less than 0.5%.)
I am the kind of Bayesian who strictly speaking only speaks of probabilities of potentially observable events. (This is a kind of logical-positivist Bayesianism, I guess.) It doesn’t do to be too strict about this sort of thing (I don’t want to just wall off entire subjects as unspeakable, which is the classic failure mode of logical positivism), but it does mean that I have to think about what other statements really mean in practical terms.
So I interpreted this to mean: assuming I learn much, much more about the nature of the world than I know now, would I think that the MWI is a useful way for people today to think about things? (That’s pretty much how I always interpret questions about interpretations.) And no matter how much learning I contemplate, the log-odds are never as good as 8 bits against, so that’s a 0.
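To make the arithmetic behind "8 bits against" concrete — assuming, as seems intended, that a "bit" here is the base-2 logarithm of the odds p/(1−p) — a quick sketch (function names are just illustrative):

```python
import math

def log_odds_bits(p):
    """Log-odds of probability p, in bits: log2(p / (1 - p))."""
    return math.log2(p / (1 - p))

def prob_from_bits(bits):
    """Inverse: the probability whose log-odds equal `bits` bits."""
    odds = 2.0 ** bits
    return odds / (1.0 + odds)

# 8 bits against means odds of 1:256, i.e. a probability of 1/257,
# just under 0.4% -- which is indeed below the 0.5% rounding threshold.
print(prob_from_bits(-8))      # about 0.0039
print(log_odds_bits(0.005))    # about -7.64 bits
```

So the 0.5% cutoff for writing a 0 corresponds to roughly 7.6 bits against, and anything at 8 bits against or worse comfortably rounds to 0.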