It’s intuitively plausible that a Solomonoff-type prior would (at least approximately) yield such an assumption.
But even if “intuitively plausible” equates to, say, 0.9999 probability, that’s insufficient to disarm Pascal’s Mugging. I think there’s at least 0.0001 chance that a better approximate prior distribution for “value of an action” is one with a “heavy tail”, e.g., one with infinite variance.
Sure, the present post deals only with the case where the value that one assigns to an action obeys a (log-)normal distribution over actions. In the case that you describe, there may (or may not) be a different way to disarm Pascal’s Mugging.
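To make the contrast concrete, here is a small sketch (the distributions and parameters are illustrative choices, not taken from the thread): it compares samples of “value of an action” drawn from a lognormal prior, which has finite variance, against samples from a Pareto prior with tail index alpha = 1.5, which has a finite mean but infinite variance, so sample averages are dominated by rare, enormous draws.

```python
import random
import statistics

# Illustrative sketch (distribution choices are ours, not from the thread):
# compare a lognormal prior over "value of an action" with a heavy-tailed
# Pareto prior whose variance is infinite (any tail index alpha <= 2).

random.seed(0)

def lognormal_sample(n):
    # Lognormal(mu=0, sigma=1): finite mean exp(1/2) and finite variance.
    return [random.lognormvariate(0.0, 1.0) for _ in range(n)]

def pareto_sample(alpha, n):
    # Inverse-CDF sampling: X = U^(-1/alpha) with U ~ Uniform(0, 1].
    # For alpha = 1.5 the mean is finite (alpha/(alpha-1) = 3) but the
    # variance is infinite, so averages are swung by rare huge draws.
    return [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]

n = 100_000
ln_mean = statistics.fmean(lognormal_sample(n))
heavy = pareto_sample(1.5, n)
pareto_mean = statistics.fmean(heavy)
# Fraction of the Pareto total contributed by the single largest draw:
top_share = max(heavy) / sum(heavy)

print(f"lognormal sample mean: {ln_mean:.3f}")
print(f"Pareto(1.5) sample mean: {pareto_mean:.3f}")
print(f"share of Pareto total from one largest draw: {top_share:.3%}")
```

The point of the sketch: under the lognormal prior the sample mean settles down quickly, while under the infinite-variance prior a single tail event can carry a disproportionate share of the total, which is exactly the regime where a mugger’s tiny-probability, huge-payoff claim can dominate an expected-value calculation.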