The problem is when people decide that they believe / do not believe some proposition P, and then consider only the expected utility of the case where P is true / false.
Reducing a probability to a binary decision clearly loses information. You can’t argue with that.
No, I can’t. But I can argue that no reduction occurs.
To be fair, I see your point in the case of politicians or people who are otherwise disinclined to change their minds: once they say they believe something, there are costs to subsequently saying they don’t. That effectively makes it a binary distinction for them.
However, for people not in such situations, if I hear they believe X, that gives me new information about their internal state (namely, that they give X something like a 55–85% chance of being the case). This doesn’t lose information. I think this covers most uses of believe/disbelieve.
So I would argue that the believe/disbelieve distinction isn’t the problem. The problem is the feedback loop created by not letting people change their minds, which forces issues into yes/no terms, combined with the pressure on politicians and public figures to fit their thinking into a soundbite. I don’t see how using other terms would ameliorate either of those problems.
The problem is when people decide that they believe / do not believe some proposition P, and then consider only the expected utility of the case where P is true / false.
Agreed that this is widespread, and that it is faulty thinking. And my $.02, which you should feel free to ignore: your main post would be clearer, I think, if you focused more on the math of why this is so. Find an example where different actions are appropriate at different probabilities, and where collapsing the probability to a 1 or 0 forces the choice of an inappropriate action; explain that example thoroughly; and only then name the concept with the labels believe/disbelieve. Hearing those labels right from the start put me on the wrong trail entirely.
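To make the kind of example I mean concrete, here is a minimal sketch. The umbrella scenario, the costs, and all the numbers are my own invention, not anything from the post: an agent acting on the full probability picks a different (and better) action than one acting on a rounded-off "belief".

```python
def expected_cost(p_rain: float, take_umbrella: bool) -> float:
    """Expected cost of a decision, given the probability of rain."""
    if take_umbrella:
        return 1.0            # small carrying cost, rain or shine
    return p_rain * 10.0      # get soaked with probability p_rain

def best_action(p_rain: float) -> bool:
    """True means 'take the umbrella': pick the lower expected cost."""
    return expected_cost(p_rain, True) < expected_cost(p_rain, False)

p = 0.3                  # 30% chance of rain: "I don't believe it will rain"
binarized = round(p)     # collapsing 0.3 to 0, i.e. treating rain as false

# Acting on the probability: expected costs are 1.0 vs. 3.0, so take it.
print(best_action(p))          # True
# Acting on the binarized belief: costs are 1.0 vs. 0.0, so leave it,
# even though that action is inappropriate given the real probability.
print(best_action(binarized))  # False
```

The point of the sketch is that "believe/disbelieve" never enters the calculation; only the probability does, and rounding it away changes which action wins.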
I thought this was a post about language usage, but it’s actually a post about how not to do math with probabilities.
Right. I’m not talking about the effect of saying “I believe X” vs. “X”.
It probably would have been clearer to use an example.