I agree. In particular, I’ve noticed that a lot of frequentist methods can be described in terms of the Bayesian method by replacing a step where you take the expected value of some quantity with a step where you take the worst case of that quantity. This improves computational efficiency by avoiding a difficult integral. Of course we choose the worst case rather than the best case because humans are risk-averse. This points at an interesting fact: Bayesian methods are the same for every agent, but when resource constraints force you away from Bayesian methods, your inferences can end up depending on your utility function. (And because humans are risk-averse, the frequentist method often looks more responsible and conservative than the Bayesian method, even though the Bayesian method, used with an accurate utility function, will always produce predictions with an optimal amount of risk-aversion.)
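To make the substitution concrete, here is a minimal toy sketch (my own illustration, not from the comment): scoring an estimate of a coin's bias against a discrete set of candidate parameters. The Bayesian step averages the loss against an assumed prior (the integral, here just a sum); the frequentist-style step replaces that average with the worst case over candidates, which needs no prior and no integration. All names and numbers are hypothetical.

```python
# Hypothetical setup: the coin's true bias is one of these candidates,
# with an assumed prior over them.
candidates = [0.2, 0.5, 0.8]
prior = [0.25, 0.5, 0.25]

def squared_loss(estimate, theta):
    return (estimate - theta) ** 2

def bayes_risk(estimate):
    # Bayesian step: expected loss, i.e. loss integrated (summed) against the prior.
    return sum(p * squared_loss(estimate, th) for p, th in zip(prior, candidates))

def worst_case_risk(estimate):
    # Frequentist-style substitute: drop the integral, take the max over candidates.
    return max(squared_loss(estimate, th) for th in candidates)

# The worst case always upper-bounds the expectation, which is the sense
# in which the worst-case criterion looks "more conservative".
print(bayes_risk(0.5), worst_case_risk(0.5))
```

Note that `worst_case_risk` never consults `prior` or any utility-weighted average, which is the computational saving; the price is that the criterion bakes in maximal risk-aversion rather than whatever level your actual utility function implies.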