It seems that the problems to do with rationality have correct solutions, but not the problems to do with values.
Why? vNM utility maximization seems like a philosophical idea that’s clearly on the right track. There might be other such ideas about being friendly to imperfect agents.
vNM is about rationality, i.e., about decisions.
Being friendly to imperfect agents is something I've seen no such idea for; it's very hard even to define.
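For reference, here is the standard statement of the vNM theorem being invoked; the notation ($X$, $L$, $M$, $u$) is introduced here for illustration and is not from the exchange itself.

$$
\text{If a preference relation } \succeq \text{ over lotteries on outcomes } X \text{ satisfies completeness, transitivity, continuity, and independence,}
$$
$$
\text{then there is a utility function } u : X \to \mathbb{R} \text{ such that } L \succeq M \iff \sum_i p_i\,u(x_i) \ge \sum_i q_i\,u(x_i),
$$

where $L$ assigns probability $p_i$ and $M$ probability $q_i$ to outcome $x_i$. This is why vNM reads as a claim about rational decision-making: the axioms constrain choices among gambles, and expected-utility maximization falls out; the theorem says nothing about which $u$, i.e., which values, an agent should have.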