I believe I’ve solved the problem. I’m going to include this in my next post on probability theory fundamentals, but here is the gist of it.
The problem is to come up with a general decision algorithm that both works (in the sense of making the right decisions) and (if possible) makes epistemic sense.
The meta-problem here is that people were looking for the answer in the wrong place, searching for a different decision-making algorithm when what we actually needed was a satisfying epistemological account. The core crux isn’t in decision theory but one step earlier: in probability theory.
UDT works, but it doesn’t compute or make use of “probability of being at X”, so epistemically it doesn’t seem very satisfying.
That should be a clue that “probability of being at X” isn’t, in fact, a thing: the event “I’m at X and not at Y” is ill-defined. In other words, the problem is with our intuition, which mistakenly assumes that there should be such an event, and with the lack of a strict epistemological framework that would let us answer questions such as “Does this mathematical model fit the setting?” and “Is this event well-defined?”
Here I provide this framework. An event is a condition in a belief-updating algorithm, one that has to return a definite True or False in every iteration of the probability experiment approximating some process to the best of our knowledge; in our case, the Absent-Minded Driver problem. The statement “I’m at X and not at Y” doesn’t satisfy this condition for the Absent-Minded Driver, because in some iterations of the experiment the driver will be at both. Therefore it isn’t an event and can’t be conditionalized on.
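To make this concrete, here is a minimal Python sketch; the per-iteration model and function names are my own illustration, not part of the original problem statement. It treats one full drive as one iteration and checks which statements return a definite truth value every time:

```python
import random

def run_iteration(p_continue: float) -> set[str]:
    """One iteration of the experiment: the driver always reaches
    intersection X and, if they continue, also reaches Y."""
    visited = {"X"}
    if random.random() < p_continue:
        visited.add("Y")
    return visited

def at_x_and_not_y(visited: set[str]):
    """'I'm at X and not at Y': ill-defined whenever the driver
    visits both intersections within the same iteration."""
    if "X" in visited and "Y" in visited:
        return None  # at both during this iteration: no single truth value
    return "X" in visited

def at_x_or_y(visited: set[str]) -> bool:
    """'I'm at X or Y': returns a definite True in every iteration."""
    return "X" in visited or "Y" in visited

p = 2 / 3  # planning-optimal continuation probability for the standard payoffs (0, 4, 1)
trials = [run_iteration(p) for _ in range(10_000)]
print(all(at_x_or_y(v) for v in trials))               # True: a genuine event
print(any(at_x_and_not_y(v) is None for v in trials))  # True: ill-defined
```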
The event that is well-defined in every iteration of the experiment is “I’m at X or Y”. This event has probability 1, which means trivial conditionalization: on its realization, the driver’s credences do not change. Therefore everything adds up to normality.
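And a one-assert sketch of why conditionalizing on a probability-1 event is trivial (the prior over the driver’s policies is hypothetical, purely illustrative):

```python
from fractions import Fraction

def conditionalize(prior, likelihood):
    """Bayesian update: posterior proportional to prior times likelihood."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

prior = {"continue": Fraction(2, 3), "exit": Fraction(1, 3)}
# "I'm at X or Y" holds with probability 1 under every hypothesis,
# so its likelihood is 1 everywhere and the update changes nothing.
likelihood = {h: 1 for h in prior}
assert conditionalize(prior, likelihood) == prior
```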