Your criticisms of preferences seem to be stated in terms of preferences. Nobody would be able to apply them to themselves, because they would not make sense if your preferences are already different. For example, it doesn’t make sense to say that a preference should be discounted because it doesn’t value your life, since not valuing your life is the subjectively right thing to do if you prefer it. The exceptions I noticed in your list are “maybe it’s not actually your preference” and “maybe it conflicts with another of your preferences.”
On behavior, we already have ways of getting behavior from beliefs and preferences—a consistent pattern of behavior is equivalent to holding certain axioms and/or wanting certain results—for example, rational behavior is preference-maximizing. To ignore this powerful tool and fall back on subjective criticism seems like a bad choice.
Your criticisms of preferences seem to be stated in terms of preferences. Nobody would be able to apply them to themselves, because they would not make sense if your preferences are already different.
I agree, if we could start the process with the subject’s true preferences, and the subject were rational. Instead it seems we have to start with the results from introspection, which might be wrong. I’m trying to understand what to do about that. I think people should take the possibility of incorrect introspection seriously.
On behavior, we already have ways of getting behavior from beliefs and preferences—a consistent pattern of behavior is equivalent to holding certain axioms and/or wanting certain results—for example, rational behavior is preference-maximizing. To ignore this powerful tool and fall back on subjective criticism seems like a bad choice.
I agree with you there. Several of the criticisms of behavior I listed were about behavior not matching stated or inferred preferences, and perhaps in principle that’s all we need, just as the criticisms of belief can be simplified in principle down to Bayes’ rule and a prior. In practice, people sometimes do a poor job of enacting their preferences, and IMO subjective criticism helps there.
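To make “behavior not matching inferred preferences” concrete, here is a minimal sketch (my own illustration, not something from the discussion) of the standard revealed-preference idea: a set of observed choices is consistent with maximizing *some* utility function exactly when the revealed “preferred to” relation contains no cycles.

```python
def consistent_with_some_utility(choices):
    """Check whether observed pairwise choices could come from maximizing
    some utility function: the revealed strict-preference relation must be
    acyclic, so it can be extended to a total order (a utility ranking).

    `choices` is a list of (chosen, rejected) pairs.
    """
    prefers = set(choices)
    remaining = {x for pair in choices for x in pair}
    # Peel off undominated options one layer at a time; if at some point
    # every remaining option is dominated, the relation contains a cycle
    # and no utility function can reproduce these choices.
    while remaining:
        maximal = [x for x in remaining
                   if not any((y, x) in prefers for y in remaining)]
        if not maximal:
            return False
        remaining -= set(maximal)
    return True

# Choices consistent with a single ranking A > B > C:
print(consistent_with_some_utility([("A", "B"), ("B", "C"), ("A", "C")]))  # True
# Cyclic choices that no single utility function can produce:
print(consistent_with_some_utility([("A", "B"), ("B", "C"), ("C", "A")]))  # False
```

The cyclic case is the kind of mismatch meant above: the behavior itself rules out every candidate preference, so something has to give.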
I agree, if we could start the process with the subject’s true preferences, and the subject were rational. Instead it seems we have to start with the results from introspection, which might be wrong. I’m trying to understand what to do about that. I think people should take the possibility of incorrect introspection seriously.
Then you’re just dealing with beliefs about preferences, which are a kind of beliefs, so this reduces to PCR for beliefs.
Then you’re just dealing with beliefs about preferences, which are a kind of beliefs, so this reduces to PCR for beliefs.
You’re right there. And PCR for beliefs is trivial in principle: just use Bayes’ rule and the Universal Prior based on the programming language of your choice. So far, nobody seems to be good enough at actually evaluating that prior for it to matter much which programming language you use to represent the hypotheses.
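As a toy illustration of that “trivial in principle” recipe (my own sketch; the hypothesis names and likelihoods are made up, and string length is a crude stand-in for minimal program length in the Universal Prior):

```python
def simplicity_weighted_posterior(hypotheses, likelihood_of_data):
    """Bayesian update with a simplicity prior: each hypothesis is a
    program-like string whose prior weight is 2**(-length), a rough
    stand-in for the Universal Prior's weighting by program length."""
    prior = {h: 2.0 ** -len(h) for h in hypotheses}
    unnormalized = {h: prior[h] * likelihood_of_data(h) for h in hypotheses}
    total = sum(unnormalized.values())
    return {h: w / total for h, w in unnormalized.items()}

# A short hypothesis that fits the data poorly vs. a longer one that
# fits it well; the simplicity prior can still favor the short one.
likelihoods = {"h1": 0.1, "longer-h2": 0.9}
posterior = simplicity_weighted_posterior(
    ["h1", "longer-h2"], lambda h: likelihoods[h])
```

The in-principle part is exactly this two-line update; the hard part, as noted, is that nothing like the true Universal Prior can actually be evaluated.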
So if someone introspects and says they will make choices as though they have unbounded utility, and the math makes it seem impossible for them to really do that, then I can reply “I don’t believe you” and move on, just as though they had professed believing in an invisible dragon in their garage.
That’s a really simple solution to get rid of a large pile of garbage, contingent on the math working out right. Thanks. I’ll pay more attention to the math.
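For concreteness, the kind of math in question can be sketched with the St. Petersburg lottery, the standard example of unbounded utility misbehaving: if round k pays 2**k utils with probability 2**-k, every term contributes 1 to the expectation, so the expected utility of the gamble grows without bound.

```python
def st_petersburg_expected_utility(n_terms):
    """Partial sums of the St. Petersburg expected utility. Each term is
    (2**-k) * (2**k) == 1, so the partial sum after n terms is n: the
    expectation diverges, which is what makes genuinely unbounded
    utility so hard to act on."""
    return sum((2.0 ** -k) * (2.0 ** k) for k in range(1, n_terms + 1))

print(st_petersburg_expected_utility(10))    # 10.0
print(st_petersburg_expected_utility(1000))  # 1000.0 — keeps growing with n
```

Anyone professing unbounded utility is implicitly claiming they would pay any finite price for such a gamble, which is the sort of claim the “I don’t believe you” response targets.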
ETA: I edited the OP to point to this comment. This was an excellent outcome from the conversation, by the way. LessWrong works.
(obligatory xkcd reference)
LessWrong: It works, bitches.