An intuitively compelling criterion is: these precise beliefs (which you are representable as holding) are within the bounds of your imprecise credences.
I think this is the step I reject. By hypothesis, I don’t think the coherence arguments show that the precise distribution P that I can be represented as optimizing w.r.t. corresponds to (reasonable) beliefs. P is nothing more than a mathematical device for representing some structure of behavior. So I’m not sure why I should require that my representor — i.e., the set of probability distributions that would be no less reasonable than each other if adopted as beliefs[1] — contains P.
I think I maybe figured out how to show that P must be in the representor.
You ought to assign non-zero probability to being asked to bet on arbitrary questions. In order for your policy not to be dominated, dynamic maximality will require that you commit in advance to the odds that you’d bet at (after seeing arbitrary evidence). Clearly you should bet at odds P. And it’s only permissible to bet at odds that are inside your representor.
(Now, strictly speaking, there are some nuances about what kinds of questions you can be convinced you’ll be betting on, given that some of them might be quite hard to measure/verify even post hoc. But since we’re just talking about a non-zero probability of being convinced that you’re really betting on a question, I don’t think this should be too restrictive. And even non-“pure” bets, which only indirectly get at some question q, will contribute to forcing P’s belief in q inside your representor, I think.)
Sorry, I don’t understand the argument yet. Why is it clear that I should bet at odds P, e.g., if P is the distribution that the CCT (complete class theorem) says I should be represented by?
Because you couldn’t be represented as an EV-maximizer with beliefs P if you were betting at odds other than P: any other odds would lead to lower expected value under P. (Assuming that pay-offs are proportional to some proper scoring rule.)
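To make that concrete, here is a minimal numerical sketch, assuming pay-offs proportional to the Brier score (the specific choice of Brier is my assumption; the argument only needs some proper scoring rule). It checks that the expected score under a precise belief p is maximized by reporting exactly p:

```python
# Minimal check: with a proper scoring rule (here Brier), an agent
# representable as an EV-maximizer with precise belief p maximizes
# expected pay-off only by betting at odds q = p.

import numpy as np

def expected_brier(q, p):
    """Expected Brier score (negated loss) of reporting odds q when
    the event occurs with probability p."""
    return -(p * (1 - q) ** 2 + (1 - p) * q ** 2)

p = 0.37                      # hypothetical precise probability for some question
qs = np.linspace(0, 1, 1001)  # candidate betting odds
best = qs[np.argmax(expected_brier(qs, p))]
print(f"belief p = {p}, optimal odds = {best:.3f}")  # -> optimal odds = 0.370
```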
Thanks for explaining!
Oops, right. I think what’s going on is:
“It’s only permissible to bet at odds that are inside your representor” is only true if the representor is convex. If my credence in some proposition X is, say, P(X) = (0.2, 0.49) ∪ (0.51, 0.7), IIUC it’s permissible to bet at 0.5. I guess the claim that’s true is “It’s only permissible to bet at odds in the convex hull of your representor”.
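Here is a brute-force sketch of that, under the same Brier-payoff assumption as above: over a grid approximation of (0.2, 0.49) ∪ (0.51, 0.7), betting at odds 0.5 is undominated (hence permissible) even though 0.5 is outside the set, while odds outside the convex hull, like 0.1, are dominated.

```python
# Brute-force permissibility check over a non-convex credence set.
# Assumes pay-offs proportional to the Brier score, as above.

import numpy as np

def expected_brier(q, p):
    """Expected Brier score (negated loss) of reporting odds q when
    the event occurs with probability p."""
    return -(p * (1 - q) ** 2 + (1 - p) * q ** 2)

# Grid approximation of the two open intervals in the representor.
S = np.concatenate([np.linspace(0.21, 0.49, 100),
                    np.linspace(0.51, 0.69, 100)])
candidates = np.linspace(0, 1, 201)  # alternative betting odds

def dominated(q, eps=1e-9):
    """True if some alternative odds do at least as well as q under
    every p in S and strictly better under some p."""
    base = expected_brier(q, S)
    for q2 in candidates:
        alt = expected_brier(q2, S)
        if np.all(alt >= base - eps) and np.any(alt > base + eps):
            return True
    return False

print("odds 0.5 dominated?", dominated(0.5))  # False: permissible, though 0.5 is not in S
print("odds 0.1 dominated?", dominated(0.1))  # True: 0.1 is outside the convex hull
```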
But I’m not aware of an argument that representors should be convex in general.
If there is such an argument, my guess is that it would work like this: we start with the non-convex set of distributions that seem no less reasonable than each other, and then add in whichever other distributions are needed to make it convex. But there would be no particular reason to interpret these added distributions as “reasonable” precise beliefs, relative to the distributions in the non-convex set we started with.
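For a toy illustration of what that convexification adds, using the interval example above (for full distributions the mixture would be taken event-wise, but a single proposition suffices here):

```python
# Convexifying means closing the set under mixtures. Every point of
# the hull (0.2, 0.7) is a mixture of two members of the original
# non-convex set, e.g. 0.5 itself:

p1, p2 = 0.3, 0.7   # both inside (0.2, 0.49) ∪ (0.51, 0.7)
lam = 0.5           # mixture weight
print(lam * p1 + (1 - lam) * p2)  # 0.5: in the hull, not in the original set
```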
And the kind of precise distribution P that would rationalize e.g. working on shrimp welfare seems to be the analogue of “betting at 0.5” in my example above. That is:
Our actual “set of distributions that seem no less reasonable than each other” would include some distributions that imply large positive long-term EV from working on shrimp welfare, and some that imply large negative long-term EV.
Whereas the distributions like P that imply vanishingly small long-term EV — given any evidence too weak to resolve our cluelessness w.r.t. long-term welfare — would lie only in the convex hull, not in the set itself. So betting at odds P would be permissible, and yet this wouldn’t imply that P is “reasonable” as precise beliefs.

[1] I’m not necessarily committed to this interpretation of the representor, but for the purposes of this discussion I think it’s sufficient.