You might be interested in (i.a.) Halpern & Leung’s work on minimax weighted expected regret / maxmin weighted expected utility. TL;DR: assign a weight α_p ∈ [0,1] to each probability p ∈ P in the representor, then pick the action that maximizes the minimum (or infimum) weighted expected utility across all current hypotheses:
$$a^* \in \operatorname*{arg\,max}_{a \in A} \; \min_{p \in P} \; \alpha_p \, \mathbb{E}_p[U \mid A = a]$$
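A minimal sketch of this decision rule in Python (the representor, weights, and utilities below are made-up toy values, not from the paper):

```python
# Maxmin weighted expected utility, sketched with toy numbers.
# Representor: each hypothesis is a distribution over two outcomes,
# paired with a weight alpha in [0, 1].
hypotheses = {
    "optimistic":  {"dist": [0.8, 0.2], "alpha": 1.0},
    "pessimistic": {"dist": [0.3, 0.7], "alpha": 0.5},
}

# Utility of each outcome under each action.
utilities = {
    "safe":  [1.0, 1.0],
    "risky": [3.0, -1.0],
}

def weighted_eu(action, hyp):
    """alpha_p * E_p[U | A = a] for one hypothesis."""
    return hyp["alpha"] * sum(
        p * u for p, u in zip(hyp["dist"], utilities[action])
    )

def maxmin_weighted_eu(actions, hypotheses):
    """Pick the action maximizing the minimum weighted EU over the representor."""
    return max(
        actions,
        key=lambda a: min(weighted_eu(a, h) for h in hypotheses.values()),
    )

best = maxmin_weighted_eu(utilities.keys(), hypotheses)
# "risky" has the higher EU under the optimistic hypothesis, but the
# pessimistic hypothesis drags its minimum down, so "safe" wins.
```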
An equivalent formulation uses subprobability measures (measures whose total mass is ≤ 1).
Updating on certain evidence (i.e., a concrete measurable set E ⊂ Ω, as opposed to Jeffrey updating or virtual evidence) works by conditioning each hypothesis p to p(⋅∣E) the usual way, while the weights get updated roughly according to how well p predicted the event E. This hits the obvious-in-hindsight sweet spot between [not treating all the elements of the representor equally] and [“just” putting a second-order probability over probabilities].
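A toy sketch of that update, under the assumption (which I believe matches Halpern & Leung’s likelihood-based updating, but check the paper for the exact form) that p’s new weight is proportional to α_p · p(E), renormalized so the best-scoring hypothesis gets weight 1:

```python
# Updating a weighted representor on certain evidence E (toy sketch).
# Assumed update rule: new weight proportional to alpha_p * p(E),
# renormalized so the top weight is 1; each p is conditioned on E as usual.

def update(hypotheses, event):
    """hypotheses: list of (alpha, dist), dist mapping outcomes to probs.
    event: set of outcomes observed with certainty."""
    scored = []
    for alpha, dist in hypotheses:
        p_e = sum(pr for o, pr in dist.items() if o in event)
        if p_e == 0:
            continue  # hypothesis ruled out by the evidence
        conditioned = {o: (pr / p_e if o in event else 0.0)
                       for o, pr in dist.items()}
        scored.append((alpha * p_e, conditioned))
    top = max(score for score, _ in scored)
    return [(score / top, d) for score, d in scored]

hyps = [(1.0, {"rain": 0.9, "sun": 0.1}),
        (1.0, {"rain": 0.2, "sun": 0.8})]
updated = update(hyps, {"rain"})
# The first hypothesis predicted the rain better, so it keeps weight 1.0;
# the second drops to 0.2/0.9 ≈ 0.22.
```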
(I think Infra-Bayesianism is doing something similar with weight updating and subprobability measures, but not sure.)
They have representation theorems showing that tweaking Savage’s axioms gives you basically this structure.
Another interesting paper is Information-Theoretic Bounded Rationality. They frame approximate EU maximization as a statistical sampling problem, with an inverse temperature parameter α, which allows for interpolating between “pessimism”/”assumption of adversariality”/minmax (as α→−∞), indifference/stochasticity/usual EU maximization (as α→0), and “optimism”/”assumption of ‘friendliness’”/maxmax (as α→∞).
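A quick illustration of that interpolation via the certainty-equivalent F_α(U) = (1/α) log E[exp(αU)] (my paraphrase of the free-energy functional from that line of work; the utilities and probabilities below are arbitrary toy values):

```python
import math

# Certainty-equivalent F_alpha(U) = (1/alpha) * log E[exp(alpha * U)].
# It recovers min U as alpha -> -inf, E[U] as alpha -> 0,
# and max U as alpha -> +inf.

def certainty_equivalent(utilities, probs, alpha):
    if alpha == 0:  # limiting case: ordinary expected utility
        return sum(p * u for p, u in zip(probs, utilities))
    return math.log(
        sum(p * math.exp(alpha * u) for p, u in zip(probs, utilities))
    ) / alpha

us, ps = [0.0, 1.0, 3.0], [0.2, 0.5, 0.3]
# Large negative alpha -> pessimism (close to min = 0.0),
# alpha = 0 -> plain expectation (1.4),
# large positive alpha -> optimism (close to max = 3.0).
```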
Regarding the discussion about the (im)precision treadmill (cf. the Sorites paradox: if you do imprecise probabilities, you end up with a precisely defined representor; if you weight it like Halpern & Leung, you end up with precisely defined weights; etc.), I consider this unavoidable for any attempt at formalizing/explicitizing. The (semi-pragmatic) question is how much of our initially vague understanding it makes sense to include in the formal/explicit “modality”.