Sometimes, a post or comment seems so far from epistemic virtue as to be not worth spending effort describing all the problems. I mutter “not even wrong”, downvote, and move on.
I have not voted either way on the current post.
Thank you for providing your evaluative criteria.
To me you hit on a precise, valid downvote signal: “this post is effortful for me to falsify”. Receiving that as a precise, labelled signal would help writers like me optimise.
The disconnect, I guess, is that from my perspective all of my logic is robustly defensible and laid out in full within the post. That’s precisely why I’m so keen for someone, anyone, to state precisely any logical inconsistency or area that lacks clarity.
If they did, I could expand on the area where our world-views/maps [https://www.lesswrong.com/w/map-and-territory] diverge too far, in pursuit of correcting one of our maps. To me this is the essence of rationality, and it’s jarring that I can’t get it on this platform.
Since I have to defer to an LLM for this: ChatGPT5 Pro gives me the following checklist [points 1 through 5; all other language is my own] for avoiding being “not even wrong”. I’ve added my view on how strongly I’m doing against each item:
1. State one main claim in plain language.
Strongly achieved: “we should devise mechanisms that provide an actionable route for low-quality contrarian posts/comments to become high-quality to improve the platform as a whole.”
2. Define key terms (what exactly do you mean by X?).
Moderately achieved: I could have formatted this differently instead of burying it within the prose:
“Contrarian” — surely a standard term; I illustrated it as “Being Peculiar” or exhibiting “visionary arrogance”
“High quality content” — stated as “[content that is] usually heavily upvoted [on the platform]”
“devise mechanisms” — stated both as “implementing a system where negative votes on a post require the voter to cite their reason for negative voting — using either the Reaction system or a brief note of 30+ characters.” and, self-referentially on this post, via the “comment following my guideline” → “me responding” → “readers casting their vote” dynamic
“the platform as a whole” — described as LessWrong and its central mission
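For concreteness, the downvote mechanism I proposed amounts to a simple validation rule. This is only a sketch with hypothetical names (`downvote_allowed`, `MIN_NOTE_LENGTH`), assuming the 30-character threshold from the post and that any Reaction counts as a cited reason:

```python
# Hypothetical sketch of the proposed gate: a downvote is accepted only
# if the voter attaches a Reaction or a written note of 30+ characters.
MIN_NOTE_LENGTH = 30

def downvote_allowed(reaction, note):
    """Return True if the downvote carries an acceptable reason."""
    if reaction:  # any Reaction counts as a cited reason
        return True
    if note and len(note.strip()) >= MIN_NOTE_LENGTH:
        return True  # a substantive written note also counts
    return False
```

So `downvote_allowed(None, "vague")` would be rejected, while a Reaction alone, or a note meeting the length threshold, would pass.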
3. Show your reasoning chain: premises → inference → conclusion.
Strongly achieved:
Premise: I spent hours putting together a post containing a contrarian view I’m proud of, but received no feedback besides a couple of downvotes.
Inference: contrarian views are too loosely dismissed on the platform: “The only way you can be a visionary is to express a bunch of things that are, by definition, not the societal norm… [but] people with societally normative viewpoints will disagree with you.”
Conclusion: “I’m anticipating this post to be a straight shot to meta-irony: I have confidently made a non-normative claim, so expect a couple of negative post votes, absent of material feedback.” — this has extended to −16 karma across 11 votes, and still nobody has engaged to offer a logical inconsistency.
4. Cite evidence and say what would change your mind.
Strongly achieved:
Using my great wit [this is hyperbole, for humour], I self-referentially operationalised the post to illustrate my point. What would change my mind is people actually upvoting me.
5. Quantify uncertainty (even roughly) and address the strongest counterargument.
Weakly achieved — I guess I was leaving this open for audience participation.
Uncertainty: I’m uncertain about whether my writing style is suitable for this platform, or if I should defer to in-person interactions and maybe video content to express my ideas instead.
The strongest counterargument is that I should just write my posts like an automaton [also hyperbole, for humour: precisely, I mean that a common writing style on LW is a numbered/bulleted list of principles], instead of infusing the humour and illustrative devices that come naturally to me.
The problem with this is that writing that way isn’t fun for me and doesn’t come as naturally. In essence it would be a barrier to my contributing anything at all. I view that as a shame, because I do believe my post consists wholly of robust logic and satisfies this “avoid being ‘not even wrong’” checklist. If I had provided the checklist at the top of my post, would it have made the post easier to parse and thus better received by the community? Or am I still missing something?