“believe x” approximately means “think it is probable/highly probable that x” to me. It would seem you don’t share this definition, as it makes your post into gibberish; what does “believe x” mean?
That is approximately what “believe x” means, and the post would be gibberish if it did not mean that.
It’s the wrong question. The question is what steps are worth taking according to your assigned probabilities and expected-value computations.
This appears to be a non sequitur to me; it seems entirely natural to me that whether you think X is probable AND what you ought to do in the case that you think X is probable are both reasonable (and, depending on the situation, necessary) questions.
Or maybe I completely don’t understand what you wrote. Sorry if I came off brusquely.
The probability you assign to X is relevant. The point is that once you use the “belief” frame, you’re throwing away that probability in favor of a “believe / don’t believe” duality.
Of course you throw out the details when you choose a word. The same happens when you choose any other word. The same argument seems to chasten the use of the word “tiger” when describing a tiger, since that throws away an exact probability estimate of the apparition being a tiger.
In my head, when I hear people say they “believe” something, I take that to mean they think it is 55% to 85% probable (numbers not exact, obviously; outside of religious contexts, of course). It somehow didn’t occur to me that that’s probably a weird thing to do.
Well, PhilGoetz is claiming (if I am finally understanding him) that casting things in the light of believe/disbelieve loses information. To me—and to you also, it would seem—it gains information. It could be context dependent, but I can’t think of a context* in which I would take it to mean something other than a statement about how probable something is, including the examples Phil gave in his post. We can’t all be right...
In general I agree with the premise that things can be forced into bad terms by a less-than-helpful question, but I’m not at all convinced that this is a good example. However, I know that when I think to, I use the word “think” instead of “believe” because I think it’s clearer, so on some level I must agree that “believe” leaves some sort of ambiguities.
*I’m completely excluding religious usages from consideration and will not mention this caveat again.
Well, PhilGoetz is claiming (if I am finally understanding him) that casting things in the light of believe/disbelieve loses information. To me—and to you also, it would seem—it gains information.
I agree. Compare this with computation of a factorial function. You start with knowing that the function is f(n) = if (n>1) n*f(n-1) else 1. Then you find out that f(1)=1, then that f(2)=2, etc. With each step, you are not taking new data out of the environment; you are working from what you already have, simply juggling the numbers, but you gain new information.
For more on this view, see S. Abramsky (2008). ‘Information, processes and games’ (PDF). In P. Adriaans & J. van Benthem (eds.), Handbook of the Philosophy of Information. Elsevier Science Publishers.
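The recursive definition quoted in the comment above can be written out directly. A minimal sketch for illustration (the name `f` comes from the comment; everything else is mine):

```python
# The quoted definition: f(n) = n * f(n-1) if n > 1, else 1.
def f(n):
    return n * f(n - 1) if n > 1 else 1

# Unfolding the recursion step by step takes no new data from the
# environment; it only "juggles" what the definition already contains,
# yet it yields values we did not explicitly know before:
for n in range(1, 5):
    print(n, f(n))  # 1 1, 2 2, 3 6, 4 24
```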
I agree. Compare this with computation of a factorial function. You start with knowing that the function is f(n) = if (n>1) n*f(n-1) else 1. Then you find out that f(1)=1, then that f(2)=2, etc. With each step, you are not taking new data out of the environment; you are working from what you already have, simply juggling the numbers, but you gain new information.
That’s an invalid comparison. That’s a mathematical operation that doesn’t involve information loss, and hence has nothing to do with this discussion.
The problem is when people decide that they believe / do not believe some proposition P, and then consider only the expected utility of the case where P is true / false.
Reducing a probability to a binary decision clearly loses information. You can’t argue with that.
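An editor’s sketch of that information loss (the 0.5 threshold is an assumption for illustration; the thread does not specify one):

```python
# Collapsing a probability into a believe/don't-believe bit maps many
# distinct epistemic states onto the same value.
def believe(p, threshold=0.5):
    return p >= threshold

# 0.55 and 0.99 call for very different bets, but both collapse to "believe",
# and the original probabilities cannot be recovered from the bit:
print(believe(0.55), believe(0.99))  # True True
print(believe(0.4))                  # False
```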
Reducing a probability to a binary decision clearly loses information. You can’t argue with that.
No, I can’t. But I can argue that no reduction occurs.
To be fair, I see your point in the case of politicians or people who are otherwise indisposed to changing their minds: once they say they believe something there are costs to subsequently saying they don’t. That effectively makes it a binary distinction for them.
However, for people not in such situations, if I hear they believe X, that gives me new information about their internal state (namely, that they give X something like 55-85% chance of being the case). This doesn’t lose information. I think this comprises most uses of believe/disbelieve.
So I would argue that it’s not the believe/disbelieve distinction that is the problem; it’s the feedback loop that results from us not letting people change their minds that causes issues to be forced into yes/no terms, combined with the need for politicians/public figures to get their thoughts to fit into a soundbite. I don’t see how using other terms will ameliorate either of those problems.
The problem is when people decide that they believe / do not believe some proposition P, and then consider only the expected utility of the case where P is true / false.
Agree that this is widespread, and is faulty thinking. And my $.02, which you should feel free to ignore: your main post would be clearer, I think, if you focused more on the math of why this is so: find an example where different actions are appropriate based on the probability, and collapsing the probability into a 1 or 0 forces the choice of an inappropriate action; explain the example thoroughly; and only then name the concept with the labels believe/disbelieve. Hearing them right from the start put me on the wrong trail entirely.
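The kind of example the comment asks for can be sketched with made-up numbers (all probabilities and utilities below are hypothetical, chosen only to illustrate the point):

```python
# Action A (hedge) is worth 0 whether or not X is true.
# Action B pays +10 if X is false, but costs -20 if X is true.
def expected_value_B(p_true):
    return (1 - p_true) * 10 + p_true * (-20)

p = 0.4                       # you think X is fairly unlikely...
print(expected_value_B(p))    # -2.0: hedging (EV 0) is still the better action
print(expected_value_B(0.0))  # 10.0: but collapsing "I disbelieve X" to p = 0
                              # flips the decision toward B, inappropriately
```

The point of the sketch: at p = 0.4 the probability-weighted computation favors A, while rounding the probability down to 0 (or up to 1) forces a different, worse choice.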
I thought this was a post about language usage, but it’s actually a post about how not to do math with probabilities.
Well, PhilGoetz is claiming (if I am finally understanding him) that casting things in the light of believe/disbelieve loses information. [...] We can’t all be right...
I’m pretty sure both of us are right in this case. I agree that “casting things in the light of” believe/disbelieve can be unacceptably lossy. I was responding to you claiming it’s a “weird thing to do” to infer a 55-85 interval based on common uses of the word “believe”. Same word, but context seems to derive different concepts. AFAIK, people simply don’t tend to use the word in the 55-85 sense when they’re talking about “important” things (e.g., you don’t often hear things in the tone of, “I believe global warming is a serious problem, let me get back to you on that”).
However, I know that when I think to, I use the word “think” instead of “believe” because I think it’s clearer, so on some level I must agree that “believe” leaves some sort of ambiguities.
In common usage, “think” and “believe” seem only to differ by degrees. For me, re-reading my above examples under s/believe/think/ seems to weaken the connoted confidence.
I was responding to you claiming it’s a “weird thing to do” to infer a 55-85 interval based on common uses of the word “believe”.
I thought I must be weird since I seem to have been the only one that completely didn’t understand the post initially. But perhaps I just lack this other usage entirely, or perhaps I still don’t agree. (See my response to Phil above: http://lesswrong.com/lw/10a/you_cant_believe_in_bayes/ssc)
In common usage, “think” and “believe” seem only to differ by degrees. For me, re-reading my above examples under s/believe/think/ seems to weaken the connoted confidence.
Agree. I don’t like to give the impression that I’m more confident than I am.
Hmm, I think that lavalamp was probably thinking about the title, while I was thinking about the contents. The title is a rhetorical hook. You can believe in Bayes’ theorem in the ordinary sense of the word ‘believe’.
Of course you throw out the details when you choose a word. The same happens when you choose any other word. The same argument seems to chasten the use of the word “tiger” when describing a tiger, since that throws away an exact probability estimate of the apparition being a tiger.
That’s a problem that’s very difficult to avoid. But the more general case, which I am discussing here, is often easy to avoid.
Ah, that explains it.
In my head, when I hear people say they “believe” something, I take that to mean they think it is 55% to 85% probable (numbers not exact, obviously; outside of religious contexts, of course). It somehow didn’t occur to me that that’s probably a weird thing to do.
I don’t think it’s particularly weird. For trivial or everyday propositions, the word often seems to denote that kind of interval:
“I believe the show is on the 4th. Let me check.”
“I believe it stars Philip Seymour Hoffman.”
“Hold on. I believe my phone is ringing.”
In statements like these, “believe” plays the role of a qualifier. To express (1-epsilon) certainty, we just omit all qualifiers: “It’s on the 4th”.
I thought this was a post about language usage, but it’s actually a post about how not to do math with probabilities.
Right. I’m not talking about the effect of saying “I believe X” vs. “X”.
It probably would have been clearer to use an example.