I think we need to reduce “surprise” and “explanation” first. I suggest they have to do with bounded rationality and logical uncertainty. These concepts don’t seem to exist in decision theories with logical omniscience.
Surprise seems to be the output of a heuristic that tells you when you may have made a cognitive error or taken a computational shortcut that turned out to be wrong (i.e., you find yourself in a situation that you had previously computed to have low probability) and should go back and recheck your logic. After you’ve found and fixed such an error, perhaps you call the fix an explanation (i.e., it “explains” why the low computed probability was an error).
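A minimal toy sketch of that heuristic, under entirely made-up assumptions: the reasoner caches cheap probability estimates (the computational shortcut), fires a "surprise" signal when an observed event was precomputed to be below some threshold, and only then pays for the expensive recomputation whose output plays the role of the explanation. The threshold, the coin example, and all function names here are hypothetical illustrations, not anything from a worked-out theory:

```python
SURPRISE_THRESHOLD = 0.05  # hypothetical cutoff: events precomputed below this trigger a recheck

def quick_estimate(event):
    """Computational shortcut: a cheap approximation that happens to be wrong."""
    # Toy shortcut: assume the coin is heavily biased toward heads.
    return 0.98 if event == "heads" else 0.02

def careful_estimate(event):
    """Expensive recomputation, done only after a surprise."""
    # Toy "full" model: the coin is actually fair.
    return 0.5

def observe(event):
    cached = quick_estimate(event)
    if cached < SURPRISE_THRESHOLD:
        # Surprise: this situation was precomputed to have low probability,
        # so go back and recheck the logic.
        corrected = careful_estimate(event)
        # The fix is the "explanation" of why the cached probability was an error.
        return ("surprised", corrected)
    return ("unsurprised", cached)
```

Observing tails (precomputed at 0.02) triggers the recheck; observing heads does not. A logically omniscient agent never takes the `quick_estimate` branch at all, which is one way to see why surprise and explanation don't show up in those decision theories.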
In UDT, there ought to be equivalents of surprise and explanation, although I’m too tired to think of them right now. I’ll try again later.