I want to point out that there are lots of situations where English speakers fluently use words that don’t have clear dividing lines between their applicability and their inapplicability—it depends on context and details. “The music is loud.” What if I’m deaf or far away or like to be able to feel the bass line in my bones? That doesn’t make the sentence impermissible or even hard to understand, and I don’t need the speaker to produce a decibel value. “If you go to high altitudes, the air is thinner and you might get dizzy.” How high? If I’m dizzy in Denver and the speaker thinks you shouldn’t need to adjust your behavior until there are Sherpas about and meanwhile Batman can breathe in space, that doesn’t make the sentence false, let alone useless. “It’s cold, bring a jacket.” Oh you sweet summer child, I’m good in short sleeves, thanks, I just don’t know what you meant by “cold”.
There are lots of conversational purposes for which you don’t in fact have to know where someone draws the line. You don’t even need to be able to agree on every point’s ordering in the spectrum (“it’s colder today” “that’s just windchill”). The words gesture in a direction. I think “chemicals” does too, and you know what direction, because you came up with “unprocessed” as a gloss on “low in chemicals”. If someone doesn’t buy that brand of dip because it’s full of chemicals, in your innocent confusion I suggest you glance at the ingredients list for a guess at the threshold in question.
Suggesting search engine terms might be helpful. I don’t think I’d ever find “you’re going to confuse people” helpful—either I already know that I’m not being very precisely expressive and these are all the words I have, or, if that’s not the case, “could you elaborate/rephrase that” would be better. I didn’t feel exasperated by this comment but might by a long chain of them on this branch.
Some people are in fact responsive to “that’s a slur; the preferred term is X”, especially if X isn’t a barbarous use of language, if they were using the slur to encompass the whole group and got caught by a euphemism treadmill or just picked up their vocabulary from sources unsympathetic to Xes. And you don’t have to reject an offered word for being a syllable longer if you want to make that tradeoff. I think this is a case of Postel’s law, or should be.
I wish to clarify that I’m not asserting that everyone knows exactly what things are “chemicals” and what things are not. There’s room for disagreement, for one thing, and the disagreements might turn on all kinds of little points about where a substance came from and even why it was added to the food. But I do think that given two lists of ingredients for different brands of, say, packaged guacamole, you could distinguish “few to no chemicals” from “lots of chemicals”. That there isn’t a strict, look-up-able boundary of necessary and sufficient conditions that fits in a “coherent model” doesn’t mean it’s not useful to gesture at for some purposes, sort of like music genres. I don’t have a coherent model of music genres and I couldn’t elaborate much on what I mean if I call a song “poppy” or “jazzy” but that doesn’t mean it’s not a statement I might reasonably utter.
Yeah, I don’t fully endorse the linked Tumblr post; in particular there are certainly ways to resolve these conflicts that aren’t “abdicate the terminology yourself”. But some of it is highly relevant and well said.
Extensionally, “chemicals” is food coloring that doesn’t come straight out of a whole food, disodium EDTA, ammonia, peroxide, acetone, sulfur dioxide, aspartame, sodium aluminosilicate, tetrasodium pyrophosphate, sodium sorbate, methylchloroisothiazolinone...
And not: apple juice, water, table salt, vodka, flour, sugar, milk...
A thing doesn’t have to be a natural category for people to want to talk about it and have a legitimate interest in talking about it.
I disagree with your second point and think you’re missing mine. If you don’t want to talk to someone, don’t talk to them. You don’t have to be cruel, and your desire to be cruel doesn’t make it reasonable.
Relevant Tumblr post (not mine)
I spent a lot of this post interested in the content, mostly the rich examples, but confused about where the entire post was pointing, because I didn’t realize until toward the end that you meant something totally different by “explicit communication” than I do—you seem to mean something like “communication with a lot of moving parts and techniques and cringey NVC goop specifically addressing certain subtopics”, and I would have expected the phrase to mean “communication that is clear, not sarcastic, and not very reliant on context, tone, or other neurotypical stuff”. Some of your examples fit this definition well enough, and you wrote as though your categorization didn’t need to be… uh, explicit… that I persisted in my misunderstanding for a long while.
I do think there are intermediate stages of misery-pit-ness.
The target audience was “people like the people I’ve talked to about this who find this model/framing helpful in their efforts to set and enforce boundaries before, not after, they are harmed by taking on too much responsibility for other people”. I don’t have any really useful advice for misery pits themselves that isn’t implicitly in the post. The second conversation doesn’t come free with the first because it requires more content, which I don’t happen to have.
I’ve added a content warning but I noticed as I was composing it I wasn’t really sure what to say, so I’m low-confidence that it’s anything like what you had in mind.
The second suggestion seems to me inapplicable—it’s a definition post, not a strategy post. I don’t think you need to be in any specific state to potentially want vocabulary.
What disclaimers exactly? I can’t diagnose misery-pit-hood of article readers as a group, so I can’t say “if you’re reading this you aren’t a misery pit”. I suppose I could… say that I’m not trying to get anyone to commit suicide...? I just don’t understand how this problem could be solved by disclaimers.
Being gripped by destructive rage when your friends succeed sounds like not a central case of the thing I was trying to describe.
I’m interested in this topic, but this post strikes me as vague and meandering, which I guess might be an artifact of it being originally intended for Medium (maybe Medium audiences don’t know what social technology is at all?). I’d like to see more detailed and example-heavy posts about social technology on LW.
I freeze butter; cutting it up from frozen is hard.
Which approach makes sense depends on the extent to which it will come up again and how much that’s your problem. (And potentially, in the case of Googling things, how good your retention is.)
So, the Love Languages guy has a book about “Apology Languages”, which I thought of while reading this post because all of your apology scripts sounded awful to me and I would much rather hear something other than that. Since he wants people to buy his books, he doesn’t seem to have a summary of all five on one page, but there are various blog posts condensing the idea and some official-website excerpts/blurbs if you Google it. Like with love languages, there’s some supposed backing to the model, but it’s a convenient shorthand for usable concepts even if the backing turns out to be dodgy.
I feel like there’s a persistent assumption that not even a well-aligned AI will include human choices as a step in decisions like these. Maybe it will just be a checkbox in the overall puppeteering of circumstances that the AI carries out, so keen its prediction, but for it to go completely unmentioned in any of the hypotheticals seems like a glaring omission to me.