It doesn’t seem generally true that communication requires delicate maintenance. Liars have existed for thousands of years, and languages have diverged and evolved, and yet we still are able to communicate straightforwardly the vast majority of the time! Like you said, lying loses its effectiveness the more it is used, and so there’s a counter-pressure which automatically prevents it from taking over.
Perhaps this analogy will help us talk about things more clearly. We can think of a communication-sphere as a region with a temperature. Just as a region must be at equilibrium in order to have a temperature at all, a communication-sphere must have enough interaction that there’s a shared sense of meaning and understanding. The higher the temperature, the more uncertainty there is about what is meant in an average interaction. Normal society operates around room temperature, which is quite far from absolute zero, and yet machinery and computers and life all manage to function there! On Less Wrong, the temperature is around that of liquid nitrogen: much colder, but still not particularly close to absolute zero. People are a lot more careful with reasoning and meanings here, but various ambiguities of language are still present, along with some external entropy introduced by deceptive processes. It takes some effort to maintain this temperature, but not so much that the site can’t exist as a public website. It seems to me that you are advocating that we (though who exactly is unclear) try to bring things down to liquid-helium temperatures, perhaps because unusual forms of cooperation, like superfluidity, become possible there. And it is only around that point that the temperature becomes fragile and requires delicate maintenance.
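One loose way to make the analogy concrete (my own sketch, not anything the analogy itself commits to): treat the “temperature” of a communication-sphere as the Shannon entropy of a listener’s distribution over the plausible meanings of a typical utterance. The distributions below are made up purely for illustration.

```python
import math

def meaning_entropy(p):
    """Shannon entropy (in bits) of a listener's probability distribution
    over possible intended meanings of an utterance -- a stand-in for the
    'temperature' of a communication-sphere."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Hypothetical interpretation distributions for one utterance:
# at 'room temperature' several readings stay plausible, while a
# colder, more careful sphere concentrates mass on one reading.
room_temp = [0.5, 0.25, 0.15, 0.1]    # everyday conversation
liquid_n2 = [0.95, 0.03, 0.01, 0.01]  # careful, norm-enforced forum

print(meaning_entropy(room_temp))  # higher uncertainty (hotter)
print(meaning_entropy(liquid_n2))  # lower uncertainty (colder)
```

On these toy numbers the “room temperature” sphere carries roughly five times the interpretive uncertainty of the “liquid nitrogen” one, which matches the intuition that cooling a sphere means narrowing what an utterance could mean.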
It doesn’t seem generally true that communication requires delicate maintenance. Liars have existed for thousands of years, and languages have diverged and evolved, and yet we still are able to communicate straightforwardly the vast majority of the time! Like you said, lying loses its effectiveness the more it is used, and so there’s a counter-pressure which automatically prevents it from taking over.
It seems to me that there are numerous instances, from the Challenger O-rings to Iraqi WMDs to Lysenkoism, where telling lies became normalized, usually followed shortly by catastrophe. You could argue (and I would agree) that such catastrophes are simply part of the “automatic counter-pressure” that allows language to continue to exist. But there’s an understandable desire to find other mechanisms that don’t require as much suffering and death.
Not all lies are the same. I think Adele’s framework is slightly better than Zack’s here, but I perhaps agree with you that I struggle to use either to describe Lysenkoism, for example, or the expressed belief that RBMK reactors are infallible.
Simpler concepts like wish-fulfillment and Yarvin’s Observation (below) seem better to me at explaining virtue-signaling behavior and impression management.
“...in many ways nonsense is a more effective organizing tool than the truth. Anyone can believe in the truth. To believe in nonsense is an unforgeable demonstration of loyalty. It serves as a political uniform. And if you have a uniform, you have an army.”
It also depends on what you mean by “liars”. A lot of political speech that is itself stating lies seems to be in-group signaling.
For example, posting that a certain politician lost the election due to widespread voting fraud isn’t really saying that you believe there was fraud; it is stating that you support that side. The message header essentially is the message, even if the message body is lies.