My position, which I keep finding myself arguing here, is that this isn’t even a meaningful question. Rationality applies to beliefs, not terminal values. It doesn’t even make sense to wonder whether there is a Bayesian way to decide what to care about.
We sometimes speak of rational reasons for priorities or non-terminal values, but what we really mean is that we have rational reasons to believe that fulfilling that priority or non-terminal value fulfills our terminal value. A non-terminal normative claim is a mixed is/ought claim, and the word “rational” describes the ‘is’ part, not the ‘ought’ part.