This is not how many decisions feel to me. Many decisions are exactly a belief (complete with Bayesian uncertainty). A belief about a future action, to be sure, but one distinct in time from the action itself.
But if you only have a belief that you will do something in the future, you still have to decide, when the time comes, whether to carry out the action or not. So your previous belief doesn’t seem to be an actual decision, but rather just a belief about a future decision—about which action you will pick in the future.
See Spohn’s example about believing (“deciding”) you won’t wear shorts next winter:
One might object that we often do speak of probabilities for acts. For instance, I might say: “It’s very unlikely that I shall wear my shorts outdoors next winter.” But I do not think that such an utterance expresses a genuine probability for an act; rather I would construe this utterance as expressing that I find it very unlikely to get into a decision situation next winter in which it would be best to wear my shorts outdoors, i.e. that I find it very unlikely that it will be warmer than 20°C next winter, that someone will offer me DM 1000.- for wearing shorts outdoors, or that fashion suddenly will prescribe wearing shorts, etc. Besides, it is characteristic of such utterances that they refer only to acts which one has not yet to decide upon. As soon as I have to make up my mind whether to wear my shorts outdoors or not, my utterance is out of place.
But if you only have a belief that you will do something in the future, you still have to decide, when the time comes, whether to carry out the action or not. So your previous belief doesn’t seem to be an actual decision, but rather just a belief about a future decision—about which action you will pick in the future.
Correct. There are different levels of abstraction: predictions, intents, and observations/memories of past actions, all of which get labeled “decision”. I decide to attend a play in London next month. This is an intent and a belief. It’s not guaranteed. I buy tickets for the train and for the show. The sub-decisions to click “buy” on the websites are in the past, and therefore committed. The overall decision has more evidence behind it, and gets more confident. The cancellation window passes. Again, a bit more evidence. I board the train—that sub-decision is in the past, so is committed, but there’s STILL some chance I won’t see the play.
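To put hypothetical numbers on that (a minimal sketch; the probabilities below are invented for illustration, not drawn from any model):

```python
# Invented confidence levels for "I will see the play", rising as each
# sub-decision moves into the immutable past. Note none of them reaches 1.0.
stages = [
    ("intent formed: decide to attend", 0.70),
    ("train and show tickets bought",   0.85),
    ("cancellation window passed",      0.92),
    ("boarded the train",               0.97),
]
for stage, p in stages:
    print(f"{stage}: P(see the play) = {p:.2f}")
```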
Anything you call a “decision” that hasn’t actually already happened is really a prediction or an intent. Even DURING an action, you only have intent and prediction. While the impulse is traveling down my arm to click the mouse, the power could still go out and I don’t buy the ticket. There is the past, which is pretty immutable, and the future, which cannot be known precisely.
I think this is compatible with Spohn’s example (at least the part you pasted), and contradicts OP’s claim that “you did not make a decision” for all the cases where the future is uncertain. ALL decisions are actually predictions, until they are in the past tense. One can argue whether that’s a p(1) prediction or a different thing entirely, but that doesn’t matter to this point.
“If, on making a decision, your next thought is ‘Was that the right decision?’ then you did not make a decision.” is actually good directional advice in many cases, but it’s simply incorrect as a factual claim.
That’s an interesting perspective. Only it doesn’t seem to fit into the simplified but neat picture of decision theory. There, everything is sharply divided into two kinds of things: statements we can make true at will (actions we can currently decide to perform), to which we therefore do not need to assign any probability (hold a belief about them happening), and outcomes, which we cannot make true directly, and which are at most consequences of our actions. We can assign probabilities to outcomes, conditional on our available actions, and values, which let us compute the “expected” value of each action currently available to us. A decision is then simply picking the currently available action with the highest expected value.
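For concreteness, here is a minimal sketch of that picture in Python; the action names, outcomes, probabilities, and values are all invented for illustration:

```python
from typing import Dict

def expected_value(p_outcome_given_action: Dict[str, float],
                   value_of_outcome: Dict[str, float]) -> float:
    # EV(a) = sum over outcomes o of P(o | a) * V(o).
    # Note no probability is ever assigned to an act itself,
    # only to outcomes conditional on the act.
    return sum(p * value_of_outcome[o]
               for o, p in p_outcome_given_action.items())

def decide(actions: Dict[str, Dict[str, float]],
           values: Dict[str, float]) -> str:
    # A "decision" here is nothing more than picking the currently
    # available action with the highest expected value.
    return max(actions, key=lambda a: expected_value(actions[a], values))

# Invented example in the spirit of Spohn's shorts: probabilities of
# outcomes conditional on each action, and values over outcomes only.
actions = {
    "wear_shorts": {"comfortable": 0.1, "cold": 0.9},
    "wear_pants":  {"comfortable": 0.8, "cold": 0.2},
}
values = {"comfortable": 1.0, "cold": -1.0}

print(decide(actions, values))  # -> wear_pants
```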
Though, as you say, such a discretization for the sake of mathematical modelling fits poorly with the continuity of time.
Decision theory is fine, as long as we don’t think it applies to most things we colloquially call “decisions”. As a model of instantaneous, discrete choose-an-action-and-complete-it-before-the-next-processing-cycle choices, it’s quite a reasonable topic of study.
A more ambitious task would be to come up with a model that is more sophisticated than decision theory, one which tries to formalize your previous comment about intent and prediction/belief.
I think it’s a different level of abstraction. Decision theory works just fine if you separate the action of predicting a future action from the action itself. Whether your prior prediction influences your action when the time comes will vary by decision theory.
I think, for most problems we use to compare decision theories, it doesn’t matter much whether considering, planning, preparing, replanning, and acting are correlated time-separated decisions or whether it all collapses into a sum of “how to act at this point in time”. I haven’t seen much detailed exploration of decision theory × embedded agents, or of capacity/memory-limited ongoing decisions, but it would be interesting and important, I think.