But it seems plausible to me that acquiring more true beliefs and thinking about them clearly might lead to discovering that some values are incoherent or unreachable, and thus to abandoning them.
Some people might reasonably, and coherently, value valuing incoherent or unreachable values (in, so to say, compartmentalized good faith—that is, you might know that an algorithm is incoherent, prone to Dutch-booking, etc., but it still feels just fine from the inside), just as some people think that belief in belief might have worth of its own, are consciously hypocritical, and so on. Therefore, I'm against such one-level optimizing-away of already-held values; if you see that some specific value is a total mess, you might instead just compartmentalize a little.
(I believe I’ve already mentioned the above to you at some point.)
BTW, a classic example of people valuing an unreachable value: “Love thy enemies”. (Once I had an awesome experience meditating on it.)