I think the problem here is that you do not quite understand the problem.
It’s not that we “imagine that we’ve imagined the whole world, do not notice any contradictions and call it a day”. It’s that we know there exists an idealized procedure which doesn’t produce stupid answers: for instance, it can’t be money-pumped. We also know that the harder we approximate this procedure (consider more hypotheses, compute more inferences), the better our results are in expectation. That is not, say, a property of null hypothesis testing: the more hypotheses you consider, the more likely you are to either p-hack or drive the p-value into statistical insignificance through excessive multiple-testing correction.
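To make that last contrast concrete, here is a minimal simulation sketch (the one-sample z-test, the 0.3 effect size, and Bonferroni as the correction are just illustrative assumptions, nothing from the thread): as the number of hypotheses k grows, uncorrected testing makes at least one spurious “discovery” almost certain, while the corrected threshold alpha/k steadily erodes the power to detect a fixed real effect.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n_samples, true_effect = 0.05, 100, 0.3

# z-statistics under the null (mean 0) and for one real effect of size 0.3,
# for a one-sample test on n_samples observations of unit-variance data.
null_z = rng.normal(size=200_000)
real_z = rng.normal(loc=true_effect * np.sqrt(n_samples), size=50_000)

for k in (1, 5, 20, 100):
    # Testing k true nulls uncorrected at level alpha: chance of at least one "discovery".
    p_any_false_positive = 1 - (1 - alpha) ** k
    # Bonferroni: each test is run at level alpha / k. Estimate that critical value
    # from the simulated null distribution, then the power it leaves for the real effect.
    critical = np.quantile(null_z, 1 - alpha / k)
    power = (real_z > critical).mean()
    print(f"k={k:4d}  P(any false positive)={p_any_false_positive:.2f}  "
          f"power at alpha/k={power:.2f}")
```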
The whole computationally unbounded Bayesian business is more about “here is an idealized procedure X, and if we don’t do anything that is visibly stupid from the perspective of X, then we can hope that our losses won’t be unbounded, under some suitable notion of boundedness”. It is not obvious that your procedure can be understood this way.
There is definitely some kind of misunderstanding going on here, and I’d like to figure it out. How is it not that we “imagine that we’ve imagined the whole world, do not notice any contradictions and call it a day”? Citing you from here:
When you are conditioning on an empirical fact, you are imagining the set of logically consistent worlds where this empirical fact is true and asking yourself about the frequency of other empirical facts within this set.
How do you know which worlds are logically consistent with your observations and which are not? To do that, you would need to hold them in your mind one by one, with all their details, and check each for inconsistencies, which requires you to be a logically omniscient supercomputer with unlimited memory. And none of us is that.
So you have to be doing something else: validate consistency only to the best of your cognitive resources, and therefore “imagine that we’ve imagined the whole world, do not notice any contradictions and call it a day”.
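To spell that out, here is a toy sketch of the quoted procedure taken literally (the three facts and the single constraint are invented for illustration): conditioning means enumerating candidate worlds, discarding the logically inconsistent ones, and reading the answer off frequencies in what is left. Even in this toy case it is a brute-force pass over 2^n worlds with a full consistency check per world, which is exactly what no bounded reasoner can do for a real world.

```python
# A toy sketch of the quoted procedure taken literally: conditioning by
# enumerating "worlds" (here, truth assignments to n atomic facts), keeping
# only those consistent with the background theory and the observed evidence,
# and reading the probability of a query off their frequency.
from itertools import product

def conditional_frequency(n_facts, consistent, evidence, query):
    """P(query | evidence) by brute-force enumeration of all 2**n_facts worlds."""
    kept = [w for w in product([False, True], repeat=n_facts)
            if consistent(w) and evidence(w)]
    if not kept:
        return None  # the evidence rules out every consistent world
    return sum(query(w) for w in kept) / len(kept)

# Invented example with 3 facts (rain, wet_grass, sprinkler) and one logical
# constraint: rain implies wet grass. The check is trivial here; for a world
# rich enough to matter, both the 2**n enumeration and the per-world
# consistency check are exactly what a bounded reasoner cannot do.
p = conditional_frequency(
    n_facts=3,
    consistent=lambda w: (not w[0]) or w[1],  # rain -> wet_grass
    evidence=lambda w: w[1],                  # observed: the grass is wet
    query=lambda w: w[0],                     # how often is it raining?
)
print(p)  # 0.5: of the 4 surviving wet-grass worlds, 2 have rain
```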
It’s that we know there exists an idealized procedure which doesn’t produce stupid answers: for instance, it can’t be money-pumped.
Well, yes. That’s the goal. What I’m doing is trying to pinpoint this procedure without the framework of possible worlds, which, among other things, doesn’t allow reasoning about logical uncertainty. I replace it with a better framework, iterations of a probability experiment, which does allow that.
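For concreteness, one minimal computational reading of “iterations of a probability experiment” (a sketch only, with a made-up generative process; the actual framework is developed elsewhere): run the experiment many times and take P(query | evidence) to be the frequency of the query among the iterations where the evidence held.

```python
# A sketch only: reading "iterations of a probability experiment" as a sampling
# process. P(query | evidence) is the frequency of the query among iterations
# in which the evidence came out true; no enumeration of logically possible
# worlds is involved, only repeated runs of the experiment.
import random

def run_experiment(rng):
    """One iteration of a made-up generative process for (rain, wet_grass)."""
    rain = rng.random() < 0.3
    wet_grass = rain or (rng.random() < 0.4)  # a sprinkler may also wet the grass
    return rain, wet_grass

def conditional_frequency(n_iterations=100_000, seed=0):
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(n_iterations):
        rain, wet_grass = run_experiment(rng)
        if wet_grass:          # keep only iterations matching the evidence
            total += 1
            hits += rain
    return hits / total

print(conditional_frequency())  # ~0.52, i.e. P(rain | wet grass) for this toy process
```

Nothing in this sketch requires enumerating logically possible worlds or checking their consistency; the only object is the repeatable experiment itself.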
The whole computationally unbounded Bayesian business is more about “here is an idealized procedure X, and if we don’t do anything that is visibly stupid from the perspective of X, then we can hope that our losses won’t be unbounded, under some suitable notion of boundedness”. It is not obvious that your procedure can be understood this way.
The Bayesian procedure is the same; we’ve just got rid of all the bizarre metaphysics and are now explicitly talking about the values of a function approximating something in the real world. What is not obvious to you here? Do you expect that there is some case in which my framework fails where the framework of possible worlds doesn’t? If so, I’d like to see that example. But I’m also curious where such a belief would even come from, considering that, once again, we simply talk about iterations of a probability experiment instead of possible worlds.