I live my life under the assumption that I do have achievable values. If I had no values that I could achieve and I were truly indifferent between all possible outcomes, then my decisions would not matter. I can ignore any such possible worlds in my decision theory.
So, to clarify:
We don’t know what a perfectly rational agent would do if confronted with all goals being epistemically irrational, but there is no instrumental value in answering this question, because if we found ourselves in such a situation we wouldn’t care.
Is that a fair summary? I don’t yet know whether I agree or disagree; right now I’m just making sure I understand your position.
I believe that is a fair summary of my beliefs.
Side note: Before I was convinced by EY’s stance on compatibilism about free will, I believed in free will for a similar reason.