Suppose passive_fist translates “unlikely” as 2% and Locaha translates “unlikely” as 12%. This could mean either of two things (or some combination of them). (1) passive_fist applies the word “unlikely” to things that feel more unlikely, corresponding to lower probability estimates when forced to quantify. (2) Both actually think much the same about the event in question, as shown by their use of the same word, but they have quite different processes (at least one of them very inaccurate) for translating those thoughts into numbers.
In case 1, quantifying helps to clarify that the two people involved mean quite different things by “unlikely”. There may be a lot of fuzziness about the numbers, but once we have them we can see that passive_fist will likely be much more surprised if something s/he calls “unlikely” happens than Locaha will be if something s/he calls “unlikely” happens.
In case 2, quantifying just adds confusion and error.
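To put a rough number on “more surprised” in case 1, here is a minimal sketch that measures surprise as information-theoretic surprisal (-log2 of the assigned probability); the 2% and 12% figures are the hypothetical translations above, and using surprisal as the measure is my own assumption, not anything passive_fist or Locaha committed to.

```python
import math

def surprisal_bits(p):
    """Surprise, in bits, at seeing an event you assigned probability p: -log2(p)."""
    return -math.log2(p)

# Hypothetical numeric translations of "unlikely" from the example above.
print(surprisal_bits(0.02))  # passive_fist: ~5.6 bits
print(surprisal_bits(0.12))  # Locaha:       ~3.1 bits
```

On that measure, an “unlikely” event coming true is nearly twice as surprising for passive_fist as for Locaha, which is exactly the kind of difference the bare word hides.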
I would expect that (especially for analytical, quantitative types like most of LW’s readership) the truth is something like this. We think, mostly, in fuzzy terms that don’t correspond directly either to numbers or to words. There will be some region of subjective likelihood-feeling space that corresponds (e.g.) to the number 2% or 12%. There will be some region that corresponds (e.g.) to the word “unlikely”. These correspondences will all work differently for different people, but (a) there will generally be more consistency between one person’s “10%” and another’s than between one person’s “unlikely” and another’s, and (b) the finer-grained information you get by asking for probability estimates does have some value, provided you’ve wit enough not to imagine that everything expressed numerically is known accurately.
Plus, some people here use stuff like PredictionBook to check whether the things they label “10%” actually happen about 10% of the time.
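A toy version of that check might look like the sketch below; this is not PredictionBook’s actual code, and the records are made up. It just tallies how often events in each stated-probability bucket actually happened.

```python
from collections import defaultdict

# Hypothetical (stated probability, did it happen?) records.
predictions = [
    (0.1, False), (0.1, False), (0.1, True), (0.1, False),
    (0.9, True), (0.9, True), (0.9, False),
]

buckets = defaultdict(lambda: [0, 0])  # stated probability -> [hits, total]
for p, happened in predictions:
    buckets[p][1] += 1
    if happened:
        buckets[p][0] += 1

for p, (hits, total) in sorted(buckets.items()):
    print(f"said {p:.0%}: happened {hits}/{total} ({hits / total:.0%})")
```

If the “10%” bucket keeps landing near 10% over enough predictions, the intuition behind that number is well calibrated; if it doesn’t, that’s feedback the word “unlikely” alone could never give you.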