I’m not sure I agree. Expecting people to judge stated claims and ignore implicature all the time is unreasonable, sure. But expecting them to judge stated claims over implicature when the stated claim is about empirical facts strikes me as plenty reasonable.
...or that was my opinion until now, anyway. This bit about the brain not actually distinguishing the two has me questioning it. I still don’t think that it’s okay to conflate them, but if the tendency to do so is hardwired, then it doesn’t represent willful stupidity or intellectual dishonesty.
It is, however, still a problem, and I don’t think it’s one that can be blamed on the speaker; as Gunnar points out elsethread, it’s hard to explicitly rule out implicatures that you yourself did not think of. It’s also hard to have a discussion when you have to preface statements with disclaimers.
I should add that I am talking about relatively neutral statements here. If I may steal an example from yvain, if you say “The ultra-rich, who control the majority of our planet’s wealth, spend their time at cocktail parties and salons while millions of decent hard-working people starve,” you pretty much lose the right to complain. For contrast, if you say “90% of the planet’s wealth is held by the upper 1%,” and your discussion partner asks you why you support the monster Stalin, I think you’re on solid ground asking them WTF.
...or again, so I thought. If the brain really doesn’t distinguish between the neutral version of that statement and the listener’s belief that people making it must be Communists, then the comparison is inevitable and I am boned.
This bit about the brain not actually distinguishing the two has me questioning it.
It is clearly an overstatement. People are perfectly able to distinguish them—we are doing so right here. Perhaps what people are actually doing (I have not seen the Clark & Clark source, so I don’t know what concrete observations they are discussing) is treating the implications as intended by the speaker just as much as the explicit assertions. Well, duh, as the saying goes.
Implicatures aren’t some weird thing that the poor confused mehums do and that the oppressed slans are forced to put up with. You don’t say things that are clear without being said, because saying them is a waste of time. Implicature is a compression algorithm. As with any compression algorithm, the more you compress, the more vulnerable the message is to errors, and you have a trade-off between the two.
This, by the way, is my interpretation of the Ask/Guess cultural division: they are different compression algorithms that leave out different stuff. Mixing compression algorithms is generally a bad idea: too much gets left out if both are applied.
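The compression trade-off above can be sketched in code. This is entirely my own toy illustration, not anything from the thread’s sources: the speaker drops words the listener is expected to restore from shared context, and when speaker and listener run different “algorithms” (different defaults), the restored message diverges from the intended one.

```python
# Toy sketch of "implicature as compression" (my own illustration).
# Speaker and listener contexts here are hypothetical examples.

def compress(message, speaker_defaults):
    """Leave out words the speaker assumes go without saying."""
    return [w for w in message if w not in speaker_defaults]

def decompress(received, listener_defaults):
    """The listener fills the gaps back in from *their* defaults."""
    return received + sorted(listener_defaults)

# A Guess-culture speaker softens requests; an Ask-culture listener
# hears a bare request as a direct one.
speaker_defaults = {"please", "if convenient"}
listener_defaults = {"right now"}

intended = ["close", "the", "window", "please", "if convenient"]
sent = compress(intended, speaker_defaults)    # ["close", "the", "window"]
heard = decompress(sent, listener_defaults)    # ["close", "the", "window", "right now"]
```

Higher compression (bigger `speaker_defaults`) means shorter messages but a larger share of the meaning reconstructed from the listener’s side, which is exactly where the errors come in when the two contexts don’t match.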
People are very well able to distinguish them—we are doing so right here.
Are we? We’re discussing the distinction, sure, but is each of us distinguishing the other’s statements about implicature from the other’s implications about implicature? Did I say everything you think I said? Did you say everything I think you said?
If I read this thread, then attempt to write down from memory a list of significant statements you made, and then compare that list to your actual text, will it contain things you did not say? Will it also contain things that I thought followed from what you said, but that you neither said nor meant?
My understanding of the original quote is that it will. I found that surprising, enlightening, and scary.
We’re discussing the distinction, sure, but is each of us distinguishing the other’s statements about implicature from the other’s implications about implicature?
No, because we do not need to. We are responding to what we perceive the others’ meanings to be, regardless of how explicitly they were expressed. Only if we are uncertain of an implication, or if one person perceives an implication that another did not intend, do we need to raise the issue.
If I read this thread, then attempt to write down from memory a list of significant statements you made, and then compare that list to your actual text, will it contain things you did not say?
Yes. This will still be the case if you do not do this from memory, but write out a paraphrase with the original text in front of you.
Will it also contain things that I thought followed from what you said, but that you neither said nor meant?
It may well do that as well. Communication is fallible. That is why it has to be a two-way process: fire and forget doesn’t work. Hence also such devices as the ideological Turing test: formulating someone else’s views on a subject in a way that they agree is accurate.
ETA: The university library has the Clark & Clark book. Kahneman and Tversky don’t give a page reference, but chapters 2 and 3 of C&C discuss implicatures. What I gather from it is that yes, people make them. In experiments, their memories of a text are influenced in various ways by the background knowledge that they bring to bear. Incongruous details are more often forgotten, while congruous but absent details are added in retelling. The experiments cited in Clark & Clark measure things like how long certain linguistic comprehension tasks take and what errors are made. These are compared with the predictions of various models of how the brain is processing things.
Sounds like Bayesian updating to me, with the evidence received (the text) not being so strong as to screen off the priors. But unless you are specifically attending to it (as you might well be doing, e.g. when examining a witness at a trial) all you are aware of is the result.
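To make the “priors not screened off” point concrete, here is a toy Bayes update with entirely made-up numbers: if, in the listener’s experience, almost everyone who cites the 90% statistic turns out to be a Communist, then one neutral-sounding sentence can move even a small prior a long way.

```python
# Illustrative made-up numbers; nothing here comes from the thread's sources.
prior = 0.01                  # P(speaker is a Communist), before they speak
p_cite_given_comm = 0.90      # P(cites the 90% figure | Communist)
p_cite_given_other = 0.02     # P(cites the figure | not a Communist)

# Bayes' rule: P(comm | cite) = P(cite | comm) * P(comm) / P(cite)
p_cite = prior * p_cite_given_comm + (1 - prior) * p_cite_given_other
posterior = prior * p_cite_given_comm / p_cite    # 0.3125
```

Under these assumptions one sentence raises a 1% prior to about 31%: the statement is weak evidence about wealth, but, given that listener’s experience, fairly strong evidence about the speaker, which is the sense in which the “why do you support Stalin” question can be locally reasonable.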
I found that surprising, enlightening, and scary.
Welcome to consciousness of abstraction! Your brain does not distinguish true beliefs from false beliefs!
For contrast, if you say “90% of the planet’s wealth is held by the upper 1%,” and your discussion partner asks you why you support the monster Stalin, I think you’re on solid ground asking them WTF.
Maybe. It could have been your discussion partner’s experience that everyone who brings up the 90% thing has, in fact, been a communist. If that’s been their experience, then based on the knowledge that they have, that can be a reasonable question to ask. Compare with the claim “Marx wrote that [whatever]”; even though this might be a neutral factual claim in principle, in practice anyone who brings that up in a discussion is much more likely to be a Marxist than someone who doesn’t.
I don’t think I have any more to say in this subthread, but I wanted to thank you for looking this up. Thanks.