How do you tell the difference between a preference and a bias (in other people)?
I can’t even easily, reliably do that in myself!
Would you have any specific example?
I don’t know if this is what the poster is thinking of, but one example that came up recently for me is the distinction between risk-aversion and uncertainty-aversion (these may not be the correct terms).
Risk aversion is what causes me to strongly not want to bet $1000 on a coin flip, even though the expected value is zero. I would characterise risk aversion as an arational preference rather than an irrational bias, primarily because it arises naturally from having a utility function that is non-linear in wealth ($100 is worth a lot if you’re begging on the streets, not so much if you’re a billionaire).
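The non-linearity alone is enough to produce this effect on a fair bet; here is a quick sketch (log utility and the particular wealth levels are my own illustrative choices, not anything from the discussion above):

```python
import math

def expected_utility(wealth, bet, p_win, u):
    # Expected utility of betting `bet` dollars on a coin flip that
    # pays off with probability p_win, starting from `wealth`.
    return p_win * u(wealth + bet) + (1 - p_win) * u(wealth - bet)

u = math.log  # any strictly concave utility gives the same qualitative result

for wealth in (2_000, 50_000, 1_000_000):
    take = expected_utility(wealth, 1_000, 0.5, u)
    decline = u(wealth)
    # By Jensen's inequality, declining the fair bet is always preferred
    # under a concave utility, though the gap shrinks as wealth grows.
    print(wealth, take < decline, decline - take)
```

Note how the utility gap between declining and taking the bet shrinks as wealth grows, matching the begging-vs-billionaire intuition.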
However, something like the Allais paradox can be mathematically proven not to arise from any utility function, however non-linear, and therefore is not explainable by risk aversion. Uncertainty aversion is, roughly speaking, my name for whatever-it-is-that-causes-people-to-choose-irrationally-on-Allais. It seems to work by causing people to strongly prefer certain gains to high-probability gains, and only much more weakly prefer high-probability gains to low-probability gains.
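The "no utility function" claim is easy to check: with the standard Allais payoffs, the expected-utility difference between the two gambles in each pair is algebraically identical, so no choice of u can prefer 1A in the first pair and 1B's counterpart (2B) in the second. A sketch, using an arbitrary grab-bag of non-linear utilities of my own choosing:

```python
import math

# Standard Allais gambles: payoffs in millions, (probability, payoff) pairs.
GAMBLE_1A = [(1.00, 1)]                          # $1M for certain
GAMBLE_1B = [(0.10, 5), (0.89, 1), (0.01, 0)]
GAMBLE_2A = [(0.11, 1), (0.89, 0)]
GAMBLE_2B = [(0.10, 5), (0.90, 0)]

def expected_utility(gamble, u):
    return sum(p * u(x) for p, x in gamble)

# Assorted non-linear utility functions (illustrative, not exhaustive).
utilities = {
    "linear":   lambda x: x,
    "sqrt":     lambda x: math.sqrt(x),
    "log1p":    lambda x: math.log1p(x),
    "power0.3": lambda x: x ** 0.3,
    "exp-CARA": lambda x: 1 - math.exp(-2 * x),
}

for name, u in utilities.items():
    prefers_1a = expected_utility(GAMBLE_1A, u) > expected_utility(GAMBLE_1B, u)
    prefers_2b = expected_utility(GAMBLE_2B, u) > expected_utility(GAMBLE_2A, u)
    # EU(1A) - EU(1B) = 0.11*u(1) - 0.10*u(5) - 0.01*u(0)
    # EU(2A) - EU(2B) = 0.11*u(1) - 0.10*u(5) - 0.01*u(0)  (identical!)
    # So the common human pattern (1A and 2B together) never appears:
    print(name, prefers_1a, prefers_2b)
```

Since the two differences are term-by-term the same expression in u, the experimentally common 1A-and-2B choice pattern is inconsistent with every utility function, which is the point of the paradox.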
For the past few weeks I have been in an environment where casual betting for moderate-sized amounts ($1-2 on the low end, $100 on the high end) is common, and disentangling risk aversion from uncertainty aversion in my decision process has been a constant difficulty.
(I think) a bias would change your predictions/assessments of what is true in the direction of that bias, but a preference would determine what you want irrespective of the way the world currently is.
Or, more precisely, irrespective of the way you want the world to be.
I.e., if it affects how they interpret evidence, it’s a bias, if it affects just their decisions it’s a preference.
The problem is that in practice assigning mental states to one or the other of these categories can get rather arbitrary. Especially when aliefs get involved.
I didn’t say that’s how you determine which is which in practice; I said (or meant to say) that’s what I think each one means. (Admittedly this isn’t an answer to Jayson’s question, but I wasn’t answering that. I didn’t mean to say that everything that affects decisions is a preference; I just realized it might be interpreted that way. But obviously not everything that affects how you interpret evidence is a bias, either.)
I’m not sure I understand what you mean about aliefs. I thought the point of aliefs is that they’re not beliefs. E.g., if I alieve that I’m in danger because there’s a scary monster on TV, then my beliefs are still accurate (I know that I’m not in danger), and if my pulse rises or I scream or something, that’s neither bias nor preference, it’s involuntary.
The tricky part is if I want (preference) to go to sleep later, but I don’t because I’m too scared to turn off the light, even though I know there aren’t monsters in the closet. I’m not sure what that’s called, but I’m not sure I’d call it a bias (unless maybe I don’t notice I’m scared and it influences my beliefs) nor a preference (unless maybe I decide not to go to sleep right now because I’d rather not have bad dreams). But it doesn’t have to be a dichotomy, so I have no problem assigning this case to a third (unnamed) category.
Do you have an example of alief involvement that’s more ambiguous? I’m not sure if you mean “arbitrary” in practice or in theory or both.
Yes, it does, because you ultimately have to choose one or the other.
Look, if (hypothetically) I can’t go to sleep because my head hurts when I lie down, that’s neither a bias nor a preference. Why is it different if the reason is fear and I know the fear is not justified? They’re both physiological reactions. Why do I have to classify one in the bias/preference dichotomy and not the other?
Pretty much. Also, most preferences are 1. more noticeable and 2. often self-protected, i.e. “I want to keep wanting this thing”.
If it doesn’t end up accomplishing anything, it’s just a bias. If it causes them to believe things that result in something being accomplished, then I think it’s still technically a bias, and their implicit and explicit preferences are different.
I think most biases fall a little into both categories. I guess that means that it’s partially a preference and partially just a bias.