The parents that you referred to are now at 17 and 22 points, which seems a bit mad to me. Spotting the errors in P&R’s reasoning isn’t really the problem. The problem is to come up with a general decision algorithm that both works (in the sense of making the right decisions) and (if possible) makes epistemic sense.
So far, we know that UDT works but it doesn’t compute or make use of “probability of being at X” so epistemically it doesn’t seem very satisfying. Does TDT give the right answer when applied to this problem? If so, how? (It’s not specified formally enough that I can just apply it mechanically.) Does this problem suggest any improvements or alternative algorithms?
Awesome. I’m steadily upgrading my expected utilities of handing decision-theory problems to Less Wrong.
Again, that seems to imply that the problem is solved, and I don’t quite see how the parent comments have done that.
I believe I’ve solved the problem. I’m going to include this in my next post on probability theory fundamentals, but here is the gist of it.
> The problem is to come up with a general decision algorithm that both works (in the sense of making the right decisions) and (if possible) makes epistemic sense.
The meta-problem here is that people were looking for the answer in the wrong place, searching for a different decision-making algorithm when what we actually need is a satisfying epistemological account. The core crux isn’t in decision theory but one step earlier, in probability theory.
> UDT works but it doesn’t compute or make use of “probability of being at X” so epistemically it doesn’t seem very satisfying.
That should be a clue that “probability of being at X” isn’t, in fact, a thing: the event “I’m at X and not at Y” is ill-defined. In other words, the problem lies with our intuition, which mistakenly assumes that such an event should exist, and with the lack of a strict epistemological framework that would let us answer questions such as “Does this mathematical model fit the setting?” and “Is this event well-defined?”
Here I provide such a framework. An event is a conditional statement of a belief-updating algorithm, one that has to return a clear True or False in every iteration of the probability experiment that approximates some process to the best of our knowledge, in our case the Absent-Minded Driver problem. The statement “I’m at X and not at Y” doesn’t satisfy this condition for the Absent-Minded Driver, because in some iterations of the experiment the driver will be at both. Therefore it’s not an event, and it cannot be conditionalized on.
The event that is well-defined in every iteration of the experiment is “I’m at X or Y”. This event has probability 1, which means trivial conditionalization: on its realization the driver’s credences do not change. Therefore everything adds up to normality.
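To make the distinction concrete, here is a minimal simulation sketch. It assumes the standard Piccione–Rubinstein setup, in which the driver always passes X and continues to Y with some probability; the helper name run_trial and the choice of continuation probability 2/3 are illustrative assumptions, not part of the comment above.

```python
import random

def run_trial(p_continue):
    """One iteration of the Absent-Minded Driver experiment: the
    driver always reaches intersection X and, with probability
    p_continue, drives on and also reaches intersection Y."""
    visited = {"X"}
    if random.random() < p_continue:
        visited.add("Y")
    return visited

trials = [run_trial(p_continue=2 / 3) for _ in range(10_000)]

# “I’m at X or Y” returns True in every iteration: a well-defined
# event with probability 1, so conditioning on it changes nothing.
assert all("X" in v or "Y" in v for v in trials)

# “I’m at X and not at Y” has no single truth value per iteration:
# whenever the driver continues, the same iteration puts them at
# both X and Y, so the statement fails the True-or-False test.
ambiguous = sum(1 for v in trials if "X" in v and "Y" in v)
print(f"{ambiguous} of {len(trials)} iterations visit both X and Y")
```

Every trial satisfies the first predicate, while roughly two thirds of trials visit both intersections, and those are exactly the cases in which “I’m at X and not at Y” fails the True-or-False test.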
> The problem is to come up with a general decision algorithm that both works (in the sense of making the right decisions) and (if possible) makes epistemic sense.
I presented a solution in a comment here which I think satisfies these criteria: it gives the right answer, consistently handles the case of “partial knowledge” about one’s intersection, and correctly characterizes your epistemic condition in the absent-minded case.
I don’t see why the problem is not solved. The probability of being at X depends directly on how I decide whether to turn, so I cannot possibly use that probability to decide whether to turn; I have to settle on my turning policy first, and only then can I calculate the probability of being at X. This results in the original solution.
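As a worked sketch of “decide first, then compute”, assuming the standard payoffs from the problem (0 for exiting at X, 4 for exiting at Y, 1 for driving past both intersections):

```python
def expected_payoff(p):
    """Planning-stage expected payoff for continuation probability p:
    exiting at X (probability 1 - p) pays 0; continuing and then
    exiting at Y pays 4; continuing past both intersections pays 1."""
    return (1 - p) * 0 + p * ((1 - p) * 4 + p * 1)

# Fix the policy first, by maximizing over p on a grid...
best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(best_p, expected_payoff(best_p))  # ≈ 0.667 and ≈ 1.333, i.e. p = 2/3, EU = 4/3
```

Only once p is fixed does “the probability of being at X” acquire a value: with p = 2/3 the driver reaches X on every trial and Y on two thirds of them, so X accounts for 1/(1 + 2/3) = 3/5 of intersection-visits.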
This also shows that Eliezer was mistaken in claiming that any algorithm involving randomness can be improved by making it deterministic.
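To make that concrete with the payoffs assumed above: the two deterministic policies give EU(0) = 0 (always exit at the first intersection) and EU(1) = 1 (always continue), while the randomized policy gives EU(2/3) = 4/3, strictly better than either.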