Three ideas, not at all worked through
quantilisation and robustness
quantilising (picking from roughly the top q fraction of options rather than the strict argmax) is generally considered 'robust'
not sure what the best arguments are, but maybe a Bayesian almost always 'should' have tails that decay rapidly enough that optimising some high quantile is equivalent to optimising EV...?
contra Pascal's-wager-style failures?
a finite amount of evidence can't support arbitrarily large hypotheses...?
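the Pascal's-wager point can be made with a two-gamble toy (the gambles and numbers here are illustrative assumptions, not anything from the notes): a tiny-probability astronomical payoff dominates the EV ranking but is invisible to any reasonable quantile, while for a bounded, light-tailed gamble the two rankings agree.

```python
# toy: EV ranking vs quantile ranking, with made-up gambles
def ev(outcomes):
    """expected value of a discrete gamble given as [(value, prob), ...]"""
    return sum(v * p for v, p in outcomes)

def quantile(outcomes, q):
    """q-quantile of the same discrete gamble"""
    cum = 0.0
    for value, prob in sorted(outcomes):
        cum += prob
        if cum >= q:
            return value
    return max(v for v, _ in outcomes)

safe = [(1.0, 1.0)]                       # certain payoff of 1
wager = [(0.0, 1 - 1e-6), (1e9, 1e-6)]    # Pascal-style: EV ~1000, almost surely 0
modest = [(1.0, 0.5), (2.0, 0.5)]         # light-tailed: EV 1.5

# EV prefers the wager to the safe bet, but any quantile below the 0.999999
# level prefers the safe bet; for the light-tailed gamble, EV and the
# 0.99-quantile agree in ranking it above the safe bet
print(ev(wager), quantile(wager, 0.99))
print(ev(modest), quantile(modest, 0.99))
```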
discount rates
maybe an exponential or hyperbolic (or other) discount rate over time steps could lead to something like logarithmic preferences over resources?
my intuition says no, but I haven't run the maths
I'd be surprised if this worked over lots of different scales, but maybe on particular configurations
if those configurations happened to be plausible ancestrally, then...?
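a quick sanity check of the 'nope' intuition, under one made-up configuration: a stock of w resource units consumed one per step, with a per-step reward of 1 discounted exponentially by gamma. the closed form saturates at 1/(1 - gamma), so it is increasing and concave like log(w) but can only mimic it over a limited range, which fits the 'particular configurations, not lots of scales' guess.

```python
import math

# value of holding w units if one unit is consumed per step and each step's
# reward is discounted by gamma: V(w) = sum_{t=0}^{w-1} gamma^t
def discounted_value(w, gamma=0.95):
    return (1 - gamma ** w) / (1 - gamma)

# concave and increasing like log(w), but bounded above by 1/(1 - gamma),
# so the gain per decade of w collapses instead of staying constant
for w in (10, 100, 10_000):
    print(w, discounted_value(w), math.log(w))
```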
value of information
maybe some heuristic relating to the value of information makes roughly logarithmic preferences convergently instrumental
you don’t learn anything more if you ‘go to zero’...?
maybe that cashes out as something like quantilising?
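one way the 'go to zero' point could cash out, as a toy (the bet, the betting fractions, and the outcome sequence are all illustrative assumptions): on a repeated double-or-nothing gamble, an all-in EV maximiser hits zero at the first loss and can never bet, or observe an outcome, again, while a log-utility (Kelly-style) bettor always keeps a positive stake and so keeps receiving information.

```python
# toy: ruin ends learning -- made-up bet and outcome sequence
def run(wealth, fraction, outcomes):
    """bet `fraction` of current wealth each round of a double-or-nothing gamble"""
    history = []
    for won in outcomes:
        stake = fraction * wealth
        wealth += stake if won else -stake
        history.append(wealth)
    return history

outcomes = [True, True, False, True, False, True]  # any sequence containing a loss

all_in = run(1.0, 1.0, outcomes)  # EV maximiser when p(win) > 0.5: bet everything
kelly = run(1.0, 0.2, outcomes)   # Kelly fraction 2p - 1 for an assumed p = 0.6

# once the all-in bettor loses, wealth is 0 forever: no more bets, so no more
# observations of the gamble; the fractional bettor never touches 0
print(all_in)  # ends at 0.0
print(kelly)   # every entry positive
```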