The decision procedure you outlined in the first example seems equivalent to an evidential decision theorist placing 0 credence on worlds where Omega makes an incorrect prediction. What is the infra-bayesianism framework doing differently? It just looks like the credence distribution over worlds is disguised by the ‘Nirvana trick.’
In Newcomb’s problem, this is correct: the agent behaves exactly like an EDT agent. In other scenarios we get different behavior, e.g. counterfactual mugging. There, a UDT agent will pay, maximizing overall expected utility across both branches, even after seeing the coin land tails and Omega ask for payment. An EDT agent, on the other hand, won’t pay here, because the expected utility of paying (-100) is worse than not paying (0). The key distinction is that EDT is an updateful decision theory: it doesn’t reason about the other branches of the universe that have already been ruled out by observed evidence.
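To make the comparison concrete, here is a minimal sketch of the two calculations. The tails-branch numbers (-100 vs. 0) are from the text above; the $10,000 heads-branch reward is the standard statement of counterfactual mugging, so treat it as an assumed detail.

```python
# Counterfactual mugging payoffs. Omega flips a fair coin: on tails it asks
# you to pay $100; on heads it pays $10,000 iff it predicted you would pay
# on tails. (The 10,000 figure is the standard version, assumed here.)
HEADS_REWARD = 10_000
TAILS_COST = -100

# UDT scores the whole policy from before the coin flip:
udt_pay = 0.5 * HEADS_REWARD + 0.5 * TAILS_COST   # 4950.0
udt_refuse = 0.5 * 0 + 0.5 * 0                    # 0.0
assert udt_pay > udt_refuse  # so UDT pays

# EDT updates on seeing tails, then compares actions in that branch only:
edt_pay = TAILS_COST  # -100
edt_refuse = 0
assert edt_refuse > edt_pay  # so EDT refuses
```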
We also don’t have a credence distribution over worlds, because this would be too large to hold in our heads. Instead, we just have a set of possible worlds.
In the decision rule, how is the set of environments ‘E’ determined? If it contains every possible environment, then this means I should behave like I am in the worst possible world, which would cause me to do some crazy things.
The environment set E accounts for each possible policy the agent could take. For each policy πi, there is a corresponding environment ei in which that policy is hardcoded. We want our agent to reason only over the diagonal of the matrices I printed, i.e., over pairs (ei, πi) where the hardcoded policy matches the policy actually taken.
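A toy sketch of how the diagonal falls out of a worst-case rule, using the Nirvana trick (the payoff numbers are my own illustrative choices, not from the post):

```python
import math

# utility[i][j] is the payoff of policy pi_j in environment e_i, where e_i
# hardcodes Omega's prediction as pi_i. Off-diagonal cells are "Nirvana"
# outcomes: Omega predicted wrong, which the trick scores as +infinity.
utility = [
    [1_000_000, math.inf],  # e_0: Omega predicts one-boxing
    [math.inf, 1_000],      # e_1: Omega predicts two-boxing
]

n = len(utility)
# Maximin: for each policy, take the worst case over all environments.
# The infinite Nirvana cells never bind, so only the diagonal survives.
worst_case = [min(utility[i][j] for i in range(n)) for j in range(n)]
best = max(range(n), key=lambda j: worst_case[j])
assert worst_case == [1_000_000, 1_000]
assert best == 0  # one-boxing wins
```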
Also, when you say that an infra-bayesian agent models the world with a set of probability distributions, what does this mean? Does the set contain every distribution that would be consistent with the agent’s observations? But isn’t this almost all probability distributions?
So how it actually works is that you have a collection of hypotheses Θi, each with a probability pi attached. In Bayesianism, each Θi would simply be a distribution over the world. In infra-bayesianism, each Θi is a set of affine-measures: probability distributions with total measure ≤ 1 (rather than exactly 1), together with an affine term that tracks off-branch expected utility.
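A minimal sketch of how such a hypothesis gets scored, assuming a simplified structure (a finite outcome space and my own made-up numbers), not the full formalism:

```python
# An affine-measure over two outcomes: weights m_i summing to <= 1, plus an
# affine term b. Its score for a utility vector U is sum_i m_i * U_i + b,
# where b stands in for utility accrued on branches this measure omits.
def score(measure, b, U):
    assert sum(measure) <= 1 + 1e-9  # total measure at most 1
    return sum(m * u for m, u in zip(measure, U)) + b

# A hypothesis Theta_i is a SET of affine-measures; the agent evaluates it
# pessimistically, taking the worst score over the set.
theta = [([0.5, 0.3], 2.0), ([0.6, 0.4], 0.0)]
U = [10.0, -5.0]
hypothesis_value = min(score(m, b, U) for m, b in theta)
# scores are 5.5 and 4.0, so the pessimistic value is 4.0
```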
This set does contain that distribution, and does contain almost all probability distributions (I think). I think it has to be this way: because of non-realizability, there is no way to rule those distributions out.
Sorry if I am missing something obvious. I guess this would have been clearer for me if you explained the infra-bayesian framework a little more before introducing the decision rule.
You aren’t missing anything obvious afaict. The general framework is genuinely very complicated, so the goal of this post was to give motivation for the basic ideas. The sequence puts the framework first.