You don’t have to believe the coherence arguments. Perhaps the best approach is to build something that isn’t (at least, isn’t explicitly/directly) an expected utility maximizer. Then the challenge is to come up with a way to build a thing that does stuff you want without even having that bit of foundation. This seems likely to be harder than the world where the best approach is a clever trick that fixes the problem for expected utility maximizers.
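(For concreteness, a minimal sketch of the standard picture under discussion, in notation not taken from the thread: an agent with beliefs $P(o \mid a)$ over outcomes $o$ and utility function $U$ is an expected utility maximizer if it chooses

$$a^{*} \;=\; \arg\max_{a}\; \mathbb{E}[U \mid a] \;=\; \arg\max_{a} \sum_{o} P(o \mid a)\, U(o),$$

and the coherence arguments claim that any agent satisfying certain consistency axioms behaves as if it were choosing this way.)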
Perhaps the best approach is to build something that isn’t (at least, isn’t explicitly/directly) an expected utility maximizer. Then the challenge is to come up with a way to build a thing that does stuff you want without even having that bit of foundation.
Yep, this is what I try to do here!
I think that’s reasonable on priors, but these papers, plus the empirical track record, suggest there’s no clever trick that makes EUMs corrigible.