That’s an interesting perspective. Only it doesn’t seem to fit into the simplified but neat picture of decision theory. There everything is sharply divided between being either a statement we can make true at will (an action we can currently decide to perform), to which we therefore do not need to assign any probability (have a belief about it happening), or an outcome, which we can’t make true directly and which is at most a consequence of our action. We can assign probabilities to outcomes, conditional on our available actions, and a value to each outcome, which lets us compute the “expected” value of each action currently available to us. A decision is then simply picking the currently available action with the highest expected value.
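That neat picture can be sketched in a few lines of Python. Everything here (the actions, outcomes, probabilities, and values) is a made-up illustration, not anything from the discussion above; it just shows the shape of the computation: conditional probabilities over outcomes, values on outcomes, and a “decision” as an argmax.

```python
def expected_value(action, outcomes, prob, value):
    """Expected value of an action: sum over outcomes of
    P(outcome | action) * value(outcome)."""
    return sum(prob[action][o] * value[o] for o in outcomes)

def decide(actions, outcomes, prob, value):
    """A 'decision' in this picture is just picking the currently
    available action with the highest expected value."""
    return max(actions, key=lambda a: expected_value(a, outcomes, prob, value))

# Hypothetical example: two actions, two outcomes.
actions = ["take umbrella", "leave umbrella"]
outcomes = ["stay dry", "get wet"]
prob = {  # P(outcome | action), made-up numbers
    "take umbrella": {"stay dry": 0.99, "get wet": 0.01},
    "leave umbrella": {"stay dry": 0.70, "get wet": 0.30},
}
value = {"stay dry": 1.0, "get wet": -1.0}  # values attach to outcomes only

choice = decide(actions, outcomes, prob, value)  # -> "take umbrella"
```

Note that the model has no notion of time inside it: the probabilities, values, and available actions are all given at a single instant, which is exactly the discretization at issue below.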
Though as you say, such a discretization for the sake of mathematical modelling does fit poorly with the continuity of time.
Decision theory is fine, as long as we don’t think it applies to most things we colloquially call “decisions”. In terms of instantaneous discrete choose-an-action-and-complete-it-before-the-next-processing-cycle, it’s quite a reasonable topic of study.
A more ambitious task would be to come up with a model that is more sophisticated than decision theory, one which tries to formalize your previous comment about intent and prediction/belief.
I think it’s a different level of abstraction. Decision theory works just fine if you separate the action of predicting a future action from the action itself. Whether your prior-prediction influences your action when the time comes will vary by decision theory.
I think, for most problems we use to compare decision theories, it doesn’t matter much whether considering, planning, preparing, replanning, and acting are correlated time-separated decisions or whether it all collapses into a single “how to act at this point in time”. I haven’t seen much detailed exploration of decision theory × embedded agents, or of capacity/memory-limited ongoing decisions, but it would be interesting and important, I think.