You could use this algorithm in place of a theorem prover when controlling a constant program, to choose the action that maximizes the expected utility of the implicit conditional probability distribution.
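To make the "choose the action that maximizes expected utility" step concrete, here is a minimal sketch. All the names and the toy distribution are illustrative assumptions, not part of any actual UDT implementation; the point is only the argmax-over-actions shape of the computation.

```python
# Hypothetical sketch: choosing an action by expected utility under a
# conditional probability distribution P(outcome | action).
# The distribution and utilities below are toy values for illustration.

def expected_utility(action, outcomes, prob, utility):
    """Sum utility over outcomes, weighted by P(outcome | action)."""
    return sum(prob(o, action) * utility(o) for o in outcomes)

def best_action(actions, outcomes, prob, utility):
    """Return the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, prob, utility))

# Toy example with two outcomes and two actions:
outcomes = ["win", "lose"]
utility = {"win": 1.0, "lose": 0.0}.__getitem__
table = {"cooperate": {"win": 0.7, "lose": 0.3},
         "defect":    {"win": 0.4, "lose": 0.6}}
prob = lambda o, a: table[a][o]

print(best_action(["cooperate", "defect"], outcomes, prob, utility))  # cooperate
```

In the UDT setting the distribution would come from the inference algorithm rather than a hand-written table, but the decision rule itself is just this argmax.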
That’s exactly what I did in my first post about UDT. Did you miss that part? (See the paragraph just before “Example 1: Counterfactual Mugging”.)
That said, there are fundamental-seeming questions about UDT which this change might affect.
Definitely, a lot of issues in UDT seem to hinge on getting a better understanding of logical uncertainty, but I’m still not sure what “approachable technical details” we can try to solve now, as opposed to trying to better understand logical uncertainty in general.
For each statement, consider the maximal odds at which you would bet on it.
But there are no single odds at which I’m willing to bet on a particular statement. For one thing, I’m risk averse, so the odds depend on the amount of the bet. Now, if the VNM axioms apply, then that risk aversion can be folded into a utility function and my preferences can be expressed as expected utility maximization, so it still makes sense to talk about probabilities. But it’s not clear that the VNM axioms apply.
The “approachable technical details” I was imagining were of the form “What would an inference algorithm have to look like—how would it have to implicitly represent its probability distribution, and how inconsistent could that distribution be—in order for it to make sense to use it with UDT?” After thinking about it more, I realized these questions don’t go very far and basically boil down to the first example I gave.
Did you miss that part?
Yes. Moreover, I think I had never read your first post about UDT.
But it’s not clear that the VNM axioms apply.
I don’t quite understand your objection, but probably because I am confused. What I imagine doing, very precisely, is this: using your preferences over outcomes and VNM, construct a utility function defined on outcomes. Using this utility function, offer wagers on mathematical statements (i.e., bets of the form “You get outcome A; or, you get outcome B if statement X is true, and outcome C if statement X is false,” where A, B, and C have known utilities).
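The wager above pins down an implied probability: under the VNM setup, if you are indifferent between “A for sure” and “B if X, C if not-X,” then u(A) = p·u(B) + (1−p)·u(C), which can be solved for p. A minimal sketch, with illustrative utility values (the function name and numbers are my assumptions, not anything from the thread):

```python
# Hypothetical sketch: recovering the implied probability of statement X
# from an indifference point between "A for sure" and "B if X, else C".
# At indifference, u(A) = p*u(B) + (1-p)*u(C); solve for p.

def implied_probability(u_A, u_B, u_C):
    """Probability of X implied by indifference, given the three utilities."""
    if u_B == u_C:
        raise ValueError("B and C must differ in utility for the bet to be informative")
    return (u_A - u_C) / (u_B - u_C)

# Example: indifferent between a sure utility of 0.6 and a bet paying
# utility 1 if X is true, 0 if false -- implied probability 0.6.
print(implied_probability(u_A=0.6, u_B=1.0, u_C=0.0))  # 0.6
```

This is just the standard trick of reading probabilities off betting odds once a utility function is fixed; whether the VNM axioms actually apply to logical uncertainty is exactly the question being disputed above.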