In essence, money changes hands only if the expected utilities converge.
Great when someone tries to Pascal-mug you. Not so great when you’re just trying to buy groceries and can’t prove that they’re not a Pascal mugger. The expected utilities don’t just diverge when someone tries to take advantage of you; they always diverge.
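A toy illustration of the divergence (my own construction, assuming a complexity-penalized prior, not anyone’s actual formalism): index mugger hypotheses by description length n, give hypothesis n prior weight ~2^-n, and let the promised payoff grow like 4^n. The payoff outruns the prior, so the expected-utility partial sums grow without bound:

```python
def partial_expected_utility(N):
    """Sum prior(n) * utility(n) over the first N mugger hypotheses.

    Illustrative toy numbers only: the prior shrinks like 2^-n (a
    description-length penalty), while the promised utility grows like
    4^n, so each term is 2^n and the series diverges.
    """
    total = 0.0
    for n in range(1, N + 1):
        prior = 2.0 ** -n      # complexity penalty alone
        utility = 4.0 ** n     # payoff grows faster than the prior shrinks
        total += prior * utility
    return total

for N in (5, 10, 20):
    print(N, partial_expected_utility(N))  # 62, 2046, 2097150: unbounded
```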
Also, not making a decision is itself a decision, so there’s no more reason for money not to change hands than for it to change hands. But since you’re not actually acting against your decision theory, that problem isn’t so bad.
Well, de facto they always converge, mugging or not, and I’m not going to take as normative a formalism in which they diverge. edit: e.g., I can instead adopt the speed prior, which is far less insane than incompetent people make it out to be: the code-size penalty for optimizing out the unseen is very significant. Or, if I don’t like the speed prior (and other such “solutions”), I can simply be sane and conclude that we don’t have a working formalism. Prescriptivism is silly when it is unclear how to decide efficiently under bounded computing power.
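For concreteness, here is a minimal sketch of the kind of weighting the speed prior gives, using a Levin-style Kt penalty (description length plus log runtime); `levin_weight` and every number below are my own illustrative assumptions, not a real formalization:

```python
import math

def levin_weight(code_bits, runtime_steps):
    """Speed-prior-flavored weight: 2^-(code length + log2 runtime).

    A hypothesis pays both for its description length and for the
    computation it demands, in the spirit of Levin's Kt complexity.
    """
    return 2.0 ** -(code_bits + math.log2(runtime_steps))

# A mugger hypothesis that honestly simulates the vast unseen population
# has an astronomical runtime, so its weight collapses; a variant that
# skips ("optimizes out") the unseen must pay for the skipping logic in
# extra code bits instead.  Mundane hypotheses dominate either way.
honest_sim = levin_weight(code_bits=100, runtime_steps=2 ** 500)       # ~2^-600
optimized  = levin_weight(code_bits=100 + 200, runtime_steps=2 ** 20)  # ~2^-320
mundane    = levin_weight(code_bits=120, runtime_steps=2 ** 20)        # ~2^-140
print(honest_sim, optimized, mundane)
```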
> I can simply be sane and conclude that we don’t have a working formalism.
That’s generally what you do when you find a paradox that you can’t solve. I’m not suggesting that you actually conclude that you can’t make a decision.
Of course. And on the practical level, if I want other agents to provide me with more accurate information (something that has high utility, scaled by all the potential unlikely scenarios), I must try to make the production of falsehoods unprofitable.