10 is vague and lacks examples. (Is it the Sorites paradox?)
11 is great. (Though it does raise the question—if you can only see upvotes minus downvotes, how do you know whether a score of 1 means no one cared, or everyone cared and the votes split almost evenly? A single upvote and nothing else looks the same as fifty-one upvotes against fifty downvotes.)
That’s fair. For a more concrete example, see the immortal Scott Alexander’s recent post “Against Lie Inflation” (itself a reply to discussion with Jessica Taylor on her Less Wrong post “The AI Timelines Scam”). Alexander argues:
The word “lie” is useful because some statements are lies and others aren’t. [...] The rebranding of lying is basically a parasitic process, exploiting the trust we have in a functioning piece of language until it’s lost all meaning[.]
I read Alexander as making essentially the same point as “10.” in the grandparent, with G = “honest reports of unconsciously biased beliefs (about AI timelines)” and H = “lying”.
Note that it’s a central example if you’re doing agent-based modeling, as Michael points out.
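The thread doesn't spell out what Michael's model looks like, but for concreteness, here is a minimal and entirely hypothetical sketch of why the G/H distinction is load-bearing in an agent-based model: a sincere reporter of an unconsciously biased belief (G) and a deliberate liar (H) can produce statistically identical reports, yet they are different mechanisms, so an intervention that targets one (say, penalizing detected lies) does nothing to the other. All agent names and parameter values below are invented for illustration.

```python
import random

TRUE_VALUE = 0.3  # ground truth the agents report on (illustrative)

def unbiased_reporter():
    # Honest report of a noisy but unbiased belief.
    belief = TRUE_VALUE + random.gauss(0, 0.05)
    return belief  # says exactly what it believes

def biased_reporter(bias=0.2):
    # G: honest report of an unconsciously biased belief.
    # The distortion happens at belief *formation*; the report is sincere.
    belief = TRUE_VALUE + bias + random.gauss(0, 0.05)
    return belief  # still says exactly what it believes

def liar(target=0.5):
    # H: the belief is accurate; the distortion happens at the
    # *reporting* step, and is responsive to incentives.
    belief = TRUE_VALUE + random.gauss(0, 0.05)
    return target  # knowingly reports something other than its belief

if __name__ == "__main__":
    random.seed(0)
    for name, agent in [("unbiased", unbiased_reporter),
                        ("biased (G)", biased_reporter),
                        ("liar (H)", liar)]:
        reports = [agent() for _ in range(10_000)]
        print(f"{name:>12}: mean report = {sum(reports) / len(reports):.3f}")
```

With these parameters the G-agent and the H-agent produce the same mean report (about 0.5), so an observer who sees only reports cannot tell them apart; the difference lives entirely in the causal mechanism, which is exactly the distinction Alexander wants the word "lie" to preserve.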