Alexander contrasts the imagined consequences of the expanded definition of “lying” becoming more widely accepted with a world that uses the restricted definition:
...
But this is an appeal to consequences. Appeals to consequences are invalid because they represent a map–territory confusion, an attempt to optimize our description of reality at the expense of our ability to describe reality accurately (which we need in order to actually optimize reality).
I disagree.
Appeals to consequences are extremely valid when it comes to which things are or are not good to do (in this case, defining “lying” in one way or another); having good consequences is what it means for a thing to be good to do.
The purpose of words is to communicate information (actually it’s social bonding and communicating information, but the former isn’t super relevant here); if defining a word in a particular way makes it less effective for communication, that is directly relevant to whether we should in fact define the word that way.
Words don’t have inherent meanings; they have only the ones we agree on. In spherical-cow world, definitions converge on the concept-bundles which are useful for communication. (E.g., it’s useful to communicate “water” or “lion” and less so to communicate “the glowing golden fruit which spontaneously appears whenever someone’s hungry” or “things with four corners, gray walls, and top hats”). Of course it’s more complicated in practice, but this is still an important aim when considering how to define terms (though in most communicative contexts, the most useful definition is ‘the one everybody else is already using’). If attaching a particular concept-bundle to a particular term has bad consequences, that’s evidence it’s not a useful concept-bundle to attach to that term. Not conclusive evidence—it could be useful for communication and have bad consequences—but evidence nonetheless.
As a tangent: you mention ‘accurately describing reality’ as a desirable property for definitions to have; IMO that is itself a consequence of choosing a concept-bundle which hews closely to natural features of reality (when there are natural features to hew to! It’s also useful to be able to talk about manmade concepts like ‘red’). And also of using definitions other people also know: if your ‘glast’ beautifully captures some natural category (uhhh let’s say stars) and everyone else understands ‘glast’ to mean ‘pickles’, then referring to a massive stellar object which radiates light and heat as a ‘glast’ does not describe reality accurately. More typically, of course, words have multiple overlapping definitions, ~all of which are used by a decently-sized group of people, and all we can do is describe things accurately-according-to-some-particular-set-of-definitions and accept we’ll be misunderstood. But in the limit, a definition which nobody shares cannot describe things to anyone.
Or, to put all that in shorter terms: words should describe reality to whom?
For any answer other than “myself,” it is necessary also to consider how the other person will understand your words, in order to choose words which communicate the concepts you actually mean. You have to consider the consequences of the words you say, because you’re saying the words in order to produce a specific consequence (your reader understanding reality more accurately).
Which brings me to my next point: Scott is arguing that defining lying more broadly will make people understand the world less accurately! If using the term in a broad sense makes people too angry to be rational, and using it in a narrow sense doesn’t do that, then people in the broad scenario will end up with a worse understanding of the world. (Personally I think rationalists in particular should simply decouple harder, but with people in general, someone who understands your words as an insult is—rationally—unlikely to also assess them as a truth claim).
On the object level, Scott is wrong about whether jessicata’s usage is novel, and IMO also about how lying should be defined: I think lying should include both saying things that are technically not false with intent to deceive, and motivated self-deception in order to “honestly” report falsehoods. Using the narrow definition makes it easier for people to pretend the former are fundamentally dissimilar from lies in a way which makes them fine. (TBC, I think rationalists are too negative on lies; these things are generally bad and should be socially punished, but e.g. some rationalists think it’s wrong to ever tell a lie, and I think normal social lying is basically fine. Actually, I bet[1] the extreme anti-lie attitude is upstream of the increased concern re: false positives, come to think of it.) But on the meta level, consequences are an entirely reasonable thing to appeal to when deciding which actions we should take.
[1] https://x.com/luminousalicorn/status/839542071547441152; and some of us were damn well using it as a figure of speech.