I’d say that it doesn’t carve reality at the same places as my understanding. I neither upvoted nor downvoted the post, but I had to consciously remind myself that I even have that option.
I think that language usage can be represented as a vector in a basis of two modes:

“The Fiat”: words really have meanings, and the goal of communication is to transmit information (including requests, promises, etc!),

“Non-Fiat”: you simply try to say a phrase that makes other people do something that furthers your goals, like identifying with a social group (see Belief as Attire) or making non-genuine promises.
(Note 1: if someone asked me what mode I commonly use, I would think. Think hard.)
(Note 2: I’ve found a whole tag about the motivations that produce words: https://www.lesswrong.com/tag/simulacrum-levels! I had lost track of it for some time before writing this comment.)
In life, I try to use fewer hyperboles and replace them with non-verbal signals, which don’t carry the implication of either “the most beautiful” or “more beautiful than everyone around”.
Speaking of next steps, I’d love to see a transformer trained to manipulate those states (given a target state and the interlocutor’s tokens, it would emit its own tokens for interleaving)! I believe this would look even cooler, and it might be useful for detecting whether an AI has started to manipulate someone.
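To make the interface concrete, here is a minimal sketch of how such conditioning could be wired up. All names here (`DialogueState`, `build_prompt`, the `<level=N>` control token) are my own invention, not an existing library; the idea is just the standard control-code / prefix-conditioning trick, where the target state is prepended as a special token so the same weights can be steered at inference time.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    target_level: int  # desired simulacrum level to steer toward (hypothetical: 1-4)
    history: list = field(default_factory=list)  # interleaved (speaker, tokens) turns

def build_prompt(state: DialogueState, their_tokens: list) -> list:
    """Flatten the conditioning signal and dialogue history into one token stream.

    A control token encoding the target state is prepended; the transformer
    would then continue generation after the final <us> marker.
    """
    state.history.append(("them", their_tokens))
    prompt = [f"<level={state.target_level}>"]
    for speaker, toks in state.history:
        prompt.append(f"<{speaker}>")
        prompt.extend(toks)
    prompt.append("<us>")  # model emits its own tokens from here
    return prompt

state = DialogueState(target_level=1)
print(build_prompt(state, ["hello", "there"]))
# → ['<level=1>', '<them>', 'hello', 'there', '<us>']
```

A manipulation detector could then run the same machinery in reverse: infer which target state best explains the tokens an agent actually emitted.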