Visionary arrogance and a criticism of LessWrong voting
Posted (2025/09/12 3PM PST)
Update (2025/09/13 3AM PST): Clarified Comment Guideline notice. Added “Operationalising my recommendation” section. Added “What this will look like if my criticism is valid” section. Added Appendix with snapshot of conversation thread.
Update (2025/09/13 10AM PST): Replaced Comment Guideline with a Voting Guideline and Comment Request to be in accordance with LessWrong rules. Added “What this post is, and why you should care” section up front.
Final Update (2025/09/13 3PM PST): Minor grammatical changes for flow. Added context to the first comment exchange [Footnote 8]. Added a prediction to [Footnote 7] — I think no more updates are required. Added “Notice to new/returning readers”.
*** Notice to new/returning readers: This post has undergone a few updates (as above) but I believe is now in its final form. You are arriving at a good time — the storm has dissipated and the post should be more accessible than earlier iterations. ***
*** Voting Guideline: You should freely vote and react according to your views and LessWrong norms — I do not want to infringe upon this. ***
*** Comment Request: However, please allow me to make a request: if you do downvote this post and are willing to make that transparent, it would help me to operationalise the recommendation I put across in this post if you add a Reaction or a 30+ character comment prepended with “Downvote note:” on what to improve. ***
What this post is, and why you should care
A recommendation / feature request for the LessWrong platform
I state “we should devise mechanisms that provide an actionable route for low-quality contrarian posts/comments to become high-quality to improve the platform as a whole.”
I designed the post to self-referentially portray my recommendation in action.
Key terms used
Contrarian — I feel this is a standard term; I use it to mean someone making a “non-normative claim”, and illustrate it as “Being Peculiar” or exhibiting “visionary arrogance”
“High quality content” — I state as “[content that is] usually heavily upvoted [on the platform]”
“Devise mechanisms” — I state as “implementing a system where negative votes on a post require the voter to cite their reason for negative voting — using either the Reaction system or a brief note of 30+ characters.”
Self-referential version: I describe a “comment according to my Comment Request” → “I respond” → “readers cast their vote” dynamic.
“The platform as a whole” — I describe LessWrong, and its central mission
Self-referential version: This post is my platform.
Reasoning chain
Premise: Contrarian authors spend hours putting together a post containing a view they feel is well-reasoned, but then receive no feedback besides a couple of downvotes.
Self-referential version: I have determined a way to operationalise my recommendation upfront. I have suggested an opportunity to improve the LessWrong team’s model of supporting contrarian views with stability.
Inference: Contrarian views are too loosely dismissed on the platform: “The only way you can be a visionary is to express a bunch of things that are, by definition, not the societal norm… [but] people with societally normative viewpoints will disagree with you.”
Self-referential version: Unfortunately, there are mechanisms that will suppress my contrarian views.
Conclusion: “I’m anticipating this post to be a straight shot to meta-irony: I have confidently made a non-normative claim, so expect a couple of negative post votes, absent of material feedback.”
Self-referential version: I have shown how a person confidently expressing non-normative ideas [me] is easy to dismiss, despite this being a necessary condition of being a visionary free-thinker.
Why I’m right, and what would change my mind
Using my great wit,[1] I self-referentially operationalised this post to illustrate my point.
What would change my mind is if people actually upvoted me.
What I’m uncertain about, and a steel-man counterargument
I’m uncertain about whether my writing style is suitable for this platform, or if I should defer to in-person interactions and maybe video content to express my ideas instead.
The strongest counterargument is that I should just write my post like an automaton,[2] instead of infusing the [attempts at] humour and illustrative devices that come naturally to me.
The problem with this is that writing like that isn’t fun to me and doesn’t come as naturally. In essence it would be a barrier to me contributing anything at all. I view that as a shame, because I do believe that all of my logic is robustly defensible and wholly laid out within the post.
Operationalising my recommendation
On this post[3] I will intentionally try to illustrate how I would see my recommendation playing out:
Contrarian post is made and receives downvote, with Downvoter comment providing justification.
Contrarian should agree or disagree with the Downvoter, and cast their vote accordingly.
Other readers can cast their votes on the comments of the Contrarian and the Downvoter — here we have rationally operationalised truth-seeking by transparently surfacing a weakness of the post via the Downvoter’s comment, hearing the Contrarian’s yielding or defence, and by being able to rate both sides.
Prior context
Automatic Rate Limiting on LessWrong suggests a stable grid dynamic:
| Low-quality consensus posts/comments (Usually somewhat upvoted, or heavily upvoted when they’re funny or particularly emotionally resonant) | High-quality consensus posts/comments (Usually pretty upvoted) |
| Low-quality contrarian posts/comments (Usually somewhat downvoted, or heavily downvoted if they’re rude) | High-quality contrarian posts/comments (Usually heavily upvoted) |
The crux of my criticism aligns with this grid: we agree that high-quality contrarian posts/comments are the most valuable — they are usually heavily upvoted. It follows that we should devise mechanisms that provide an actionable route for low-quality contrarian posts/comments to become high-quality to improve the platform as a whole.[4]
The platform as a whole
I love the LessWrong platform. I think that it attracts an incredibly intelligent, well-read audience with a diverse range of perspectives.
I feel that the technical implementation of the site is exceptional — a daily, curated news-cycle, with emergent high quality posts for the homepage; posts are easily readable, and conversational threads are natural, well-moderated, and easy to parse; the Reaction system feels nicely implemented in a way where it is available to opt in to, but isn’t overpowering for folks that just want textual discourse.
As a lurker who engages with a variety of posts on the platform, I think that my own writing is decently LessWrong-y — I lay out my thoughts step-by-step and avoid logical inconsistencies or incongruously big leaps. I’m decently well-versed in the rationality literature and cite core works that my ideas build upon.
My criticism
Sometimes I spend hours putting together a post that I’m proud of, but then receive no feedback besides a couple of downvotes.
This is incredibly frustrating for two reasons:
Firstly — it seems that once voting on a post flips negative, the site won’t surface it anywhere near as prominently to readers.
Much more than that — receiving a negative vote with no qualitative feedback is solely disruptive to the author: there’s no signal for how to build on your ideas or frame them differently.
I’m decently skilled at channelling my attention well and good at tuning out noise, but I’d be lying if I were to say that I don’t find it off-putting when this happens.
I have written about a concept of “the tension between truth-seeking and societal harmony”. Authentically expressing what you feel to be true creates tension if it doesn’t match societal norms. This is a shame because I’m very pro-free-speech: I think that the world is made a better place by allowing more people to express their ideas about what is true.[5]
This is not being enabled on LessWrong: a down-voting agent can effectively silence my voice just because they disagree with me. On individual comments, “overall karma” and “agreement karma” are distinct, but for a post only a single voting metric exists.
Is this not directly opposed to LessWrong’s central mission?
LessWrong is an online forum/community that was founded with the purpose of perfecting the art of human[6] rationality.
Visionary arrogance
Now we get to some self-awareness.
Jeff Bezos, a visionary free-thinker, advanced through Amazon a cultural model with 16 core Leadership Principles (LPs) — but also some principles not codified as LPs. One of these is Amazon’s doc-writing culture, and another is the idea of embracing Being Peculiar.
From the “Being Peculiar” article linked:
“What kind of person owns being peculiar?” My answer, someone who is more concerned about their own legacy, than what others think of him. While this is admirable, it is also quite peculiar by American society’s standards.
This is hitting at the core of my argument — the only way you can be a visionary is to express a bunch of things that are, by definition, not the societal norm. Put another way: people with societally normative viewpoints will disagree with you.
I’m not equating myself to Jeff Bezos. What I’m saying is that if I were Jeff Bezos, perhaps my really insightful ideas would be buried by the current site setup.
This dynamic is especially true when someone expresses a lot of confidence in their non-normative ideas. This makes sense, because it pattern-matches as delusion, which is itself off-putting. You could argue that maybe we should wait for someone to acquire a lot of status, e.g. become a billionaire, and only then give them a platform for confidently expressing non-normative ideas. I’m not sure that I agree that this would be the best form of society though.
My recommendation
I think the site could be improved by implementing a system where negative votes on a post require the voter to cite their reason for negative voting — using either the Reaction system or a brief note of 30+ characters.
These reasons should be held to account: people should be able to see and downvote them if they are flawed.
I’m anticipating this post to be a straight shot to meta-irony: I have confidently made a non-normative claim, so expect a couple of negative post votes, absent of material feedback.
What this will look like if my criticism is valid
In this post I have presented a well-reasoned contrarian viewpoint:
Determined a way to operationalise my recommendation upfront
Suggested an opportunity to improve the LessWrong team’s model of supporting contrarian views with stability
Described mechanisms that suppress contrarian views
Described how a person confidently expressing non-normative ideas is easy to dismiss, despite this being a necessary condition of being a visionary free-thinker
This post and my comments are significantly downvoted, nobody has engaged with these four core points and the damage is done: my contrarian view is suppressed, my future posts and comments hold less weight, and I’m disillusioned by the capacity of folks to engage with contrarian viewpoints in good faith.[7]
Appendix
Snapshot of first comment exchange, taken 2025/09/13
Post at −15 overall karma [10 votes] —
[Presumed][8] Downvoter (+4 overall karma [4 votes], +3 agreement karma [3 votes]):
You may be interested in a very similar discussion from several months ago: When you downvote, explain why.
Contrarian (-7 overall karma [3 votes], −10 agreement karma [3 votes]):
Are you implying that this post (“Visionary arrogance and a criticism of LessWrong voting”) should be downvoted because it reaches the same conclusion as a 7-month old post which lacks half of the framing (“visionary arrogance”) that I use to describe voting behaviour motivations?
If so that sounds logically flawed to me, and so I both disagree and have downvoted you.
If you were not implying that and simply offering some additional context for me to refer to (the discussion in the comments is valuable), then I apologise and will revert my downvoting.
- ^
This is use of hyperbole for [attempted] comic effect.
- ^
This is also use of hyperbole for humour — instead of “like an automaton”, precisely I mean that a common writing style on LW is to provide a numbered/bulleted list of principles.
- ^
I could alternatively use the word platform.
- ^
My “criticism” is that this is not happening currently.
- ^
But then allowing society to diligently give feedback, from simple disagreement up to structured punishment.
- ^
“We say “human” rationality, because we’re most interested in how us humans can perform best given how our brains work (as opposed to the general rationality that’d apply to AIs and aliens too).”
- ^
This is the case as of 2025/09/13: “-15 karma from 10 votes on the post, −7 karma from 3 votes and −10 agreement karma from 3 votes on my first comment”. In response I have made 4 relatively small updates to the post, called out in the first line, to highlight the irony: I think that this validates my criticism and strengthens the post.
Update (2025/09/13, 3pm PST): The post is currently at −17 karma from 12 votes. It has 17 comments (including my own).
I’ve made a few edits — not wholly-transformational, but admittedly making it easier for a reader to parse — since posting. These edits have come about directly as a result of discourse in the comments.
My moderately-strongly held view is that it is now in its final form and beautifully captures everything that I set out to do.
I’d be interested in anyone’s viewpoint upon a full re-read, if indeed they are willing to dedicate the time — I totally get it if not (it’s the weekend)!
I’m not sure that this final form will be effectively surfaced to any new readers on the site, since the karma is so low. I have two ongoing Private Conversations, with whom I am providing this same update, and I hope I may be able to at least solicit final form feedback from them.
My vision: perhaps now the post is high-quality and will accrue positive karma. That is to say that I’m calling the karmic bottom at −17.
- ^
I explicitly moderated my response by saying that I apologise and will revert my criticism of my interpretation of their comment if my interpretation is incorrect.
When I received the comment, the post vote tally was at “−3”, and I was presented with a Bayesian to evaluate: P(Comment was provided in accordance with my explicit request | I made an explicit request and the post vote tally is at “−3”)
I admit that to better operationalise this I could have clarified for commenters that were following my Comment Guideline to explicitly “prepend [their comment] with “Downvote note:” ” — I have made this edit to the post.
Update (2025/09/13): The Presumed Downvoter followed up to clarify that they were not implying a reason to downvote this post, but they acknowledge that me reaching that conclusion was reasonable. Per the terms that I explicitly communicated, I’ve flipped my votes to approve and agree with their comment. I leave the rest of my comment unchanged, as a record of how I would engage with someone who was providing a reason to downvote that I disagree with.
In practice, I wouldn’t transparently say “If so that sounds logically flawed to me, and so I both disagree and have downvoted you.” which is unnecessarily confrontational — I would just do it silently. I only state it transparently here as part of operationalising my vision for deriving more signal from downvotes.
I want to say something like, you are not owed attention on your post just because it is written with good logic. That’s sort of harsh, but I do think that you have to earn the reader’s trust. People downvote for all sorts of reasons, not all of them are because of some logical mistake you made, sometimes it’s just because the post is not relevant, or seems elementary, or isn’t written well, or doesn’t engage with previous work.
I can understand getting unexplained downvotes being demoralizing, but demanding people spend more of their own effort and time to engage with you is a losing proposition. You have to make it worth their time.
But, I’m feeling generous today and I’ll try and write some of my thoughts anyway.
I found this post confusing to read, and had to go back and re-read the whole thing after reading it the first time to understand what you were even saying. For example one of the first sentences:
And yet, I don’t know what your recommendation even is yet. Take some time to explain your recommendation, and why I should care first, then I know what you’re talking about in this section.
There are similar sorts of problems all over the piece with assumptions that aren’t justified, jumping around tonally between sections, and mixing up explaining the problem with your preferred solution. It’s just not a well-written piece, or so I judged it.
Hopefully that helps!
Thank you for the feedback!
On the path to benevolence there’s a whole lot of friction. I agree with most of what you said, and I think we can extract substantive value that builds on my post:
I agree, but I feel that there is a distinct imbalance where a post can take hours of effort, and be cast aside with a 10-second vibe check and 1 second “downvote click”. I believe that the platform experience for both post authors and readers could be significantly improved by adding a second post-level signal that only takes an additional few seconds — this could be a React like “Difficult to Parse” or a ~30-character tip like “Same ideas posted recently: [link]”.
Given the existing author/reader time-investment imbalance, it feels fair to suggest adding this.
This is a valid call-out — in fairness it was an imprecision on my part because I added the “Operationalising my recommendation” section in the first (2025/09/13) edit, and overlooked the fact that this meant it preceded my stating the recommendation. I’ve updated the post to state the recommendation upfront. [Meta note: This to me is the beauty of rationality and the LessWrong platform — we can co-create great logical works. I hope this doesn’t look too much like “relying on the reader to proof-read” in lieu of https://www.lesswrong.com/posts/nsCwdYJEpmW5Hw5Xm/lesswrong-is-providing-feedback-and-proofreading-on-drafts ]
This connects to the uncertainty I relayed to @Richard_Kennaway:
Uncertainty: I’m uncertain about whether my writing style is suitable for this platform, or if I should defer to in-person interactions and maybe video content to express my ideas instead.
The strongest counterargument [to my post] is that I should just write my post like an automaton,[this is a use of hyperbole for humour — instead of “like an automaton”, precisely I mean that a common writing style on LW is to provide a numbered/bulleted list of principles] instead of infusing the [attempts at] humour and illustrative devices that come naturally to me.
I really enjoy Scott Alexander’s writing and, while clearly he’s a far more distinguished and capable writer than me, I feel he is a good role-model as someone who uses rationality but also storytelling prose to try to relay their point. That’s effectively what I hope to accomplish — but I could only really get there if I get feedback on my writing.
I have three instances of posts in this style where I do successfully have some degree of positive feedback: [1], [2], [3]
At the same time, this is a red flag to me:
In [rare] cases where I do successfully compel someone to fully read and engage with my post, I have a huge duty to ensure that they enjoy and find insightful value in my writing.
The last thing I want to be doing is to be actually wasting someone’s time.
You don’t get points for effort. Just for value.
One way to think of it is like you are selling some food in a market. Your potential buyers don’t care if the food took you 7 hours or 7 minutes to make, they care how good it tastes, and how expensive it is. The equivalent for something like an essay is how useful/insightful/interesting your ideas is, and how difficult/annoying/time-consuming it is to read.
You can decrease the costs (shorter, easy-to-follow, humor), but eventually you can’t decrease them any more and your only option is to increase the value. And well, increasing the value can be hard.
I wholeheartedly agree with you.
There is something else going on here though. As I commented on this post, which also (in my view) fell prey to the phenomenon I am describing:
To follow your analogy: I’m not asking that people purchase my sandwiches. I’m just asking that people clarify if they need them heated up and sliced in half, and don’t just tell everyone else in the market that my sandwiches suck.
This directly aligns with a plea I express in the current post:
I believe that there is value in my ideas, and I’m not that far off repositioning them in a way that will land more broadly. I just need light, constructive feedback to more closely align our maps.
However in absence of this light, constructive feedback on LessWrong, I’m quite forcefully cast aside and constrained to other avenues.
You may be interested in a very similar discussion from several months ago: When you downvote, explain why.
Are you implying that this post (“Visionary arrogance and a criticism of LessWrong voting”) should be downvoted because it reaches the same conclusion as a 7-month old post which lacks half of the framing (“visionary arrogance”) that I use to describe voting behaviour motivations?
If so that sounds logically flawed to me, and so I both disagree and have downvoted you.
If you were not implying that and simply offering some additional context for me to refer to (the discussion in the comments is valuable), then I apologise and will revert my downvoting.
Edit: Note to future readers pulled up from the nested conversation:
I explicitly moderated my response by saying that I apologise and will revert my criticism of my interpretation of their comment if my interpretation is incorrect.
I was intentionally trying to illustrate how I would see my recommendation playing out:
Contrarian post is made and receives downvote, with Downvoter comment providing justification.
Contrarian should agree or disagree with the Downvoter, and cast their vote accordingly.
Other readers can cast their votes on the comments of the Contrarian and the Downvoter — here we have rationally operationalised truth-seeking by transparently surfacing a weakness of the post via the Downvoter’s comment, hearing the Contrarian’s yielding or defence, and being able to rate both sides.
Maybe to better operationalise this I should have clarified “Comment Guideline: If you downvote this post, please also add a Reaction or a 30+ character note prepended with “Downvote note:” on what to improve.” because I’ve left myself open to bad-faith critics to miss the point.
The post is now at “−15” with 10 votes and it’s still unclear to me why — unless through a misunderstanding of the motivation behind my reply to @thenoviceoof, and consequent retributive action against the post. I wrote 864 words that:
Suggested an opportunity to improve the LessWrong team’s model of supporting contrarian views with stability
Described mechanisms that suppress contrarian views
Described how a person confidently expressing non-normative ideas is easy to dismiss, despite this being a necessary condition of being a visionary free-thinker.
Nobody has engaged with these three core points and the damage is done: my contrarian view is suppressed, my future posts and comments hold less weight, and I’m disillusioned by the capacity of folks to engage with contrarian viewpoints in good faith.
There is absolutely no such implication—you were being offered a helpful reference, and reacting with this sort of hostility is entirely unwarranted.
My understanding of your actual post was that we should explain downvotes and help people grow, but you’re not making any effort to do that.
The first line of my post is “Comment Guideline: If you downvote this post, please also add a Reaction or a 30+ character note on what to improve.”
At the time of @thenoviceoof’s comment, this post was at “−3” and I had no Reactions, and no comments besides that of @thenoviceoof. It is reasonable that I would conclude that @thenoviceoof downvoted the post, and provided their comment in accordance with my request — their comment being a justification of their downvote.
I explicitly moderated my response by saying that I apologise and will revert my criticism of my interpretation of their comment if my interpretation is incorrect.
Do you disagree with this logic? What did I do wrong?
It’s entirely plausible that someone else downvoted your post—correlation does not equal causation.
If you’re going to ask a question, it’s expected that you wait for an answer before acting. If you have enough uncertainty to ask, you don’t have enough certainty to downvote. There is nothing time-sensitive about downvoting someone.
This site in particular is prone to have fairly slow-paced conversational norms.
It is plausible, but as rationalists we deploy Bayesians — you should know that as The Dao of Bayes.
I’ve made 4 relatively small edits to the post (called out in the first line) which, together, I think significantly strengthen my argument. Akin to “activating my trap card”, lol.
I would love to hear from you whether you agree that this strengthens the argument that I make in the post. Additionally, thank you for continuing to engage with me, and I hope that I can convince you that I am trying to facilitate rationalist thinking in good faith.
One failure of logic is that you explicitly stated in your post that you already expected people to not follow this principle:
“I’m anticipating this post to be a straight shot to meta-irony: I have confidently made a non-normative claim, so expect a couple of negative post votes, absent of material feedback.”
You cannot then claim that using it was reasonable.
Furthermore: regardless of whether the comment was in fact a response to your request in line with your requested guidelines, your first action in this discussion was to publicly punish the one person who you believed to be following the guidelines you requested. You clearly have not examined the incentives you are creating here.
Edit: For the meta-meta-irony, I will also state that I downvoted and disagree-voted your comment replying to thenoviceoof for these reasons.
I don’t think that really tracks — both can easily be true:
I can receive a couple of negative post votes, absent of material feedback
I can receive a comment, see that the post vote tally is at “−3”, and be presented with a Bayesian to evaluate: P(Comment was provided in accordance with my explicit request | I made an explicit request and the post vote tally is at “−3”)
The public punishment (which, again, I explicitly moderate in the same comment — the selective reading of those arguing against me is quite astonishing) was intentional to try to illustrate how I would see my recommendation playing out:
Contrarian post is made and receives downvote, with Downvoter comment providing justification.
Contrarian should agree or disagree with the Downvoter, and cast their vote accordingly.
Other readers can cast their votes on the comments of the Contrarian and the Downvoter — here we have rationally operationalised truth-seeking by transparently surfacing a weakness of the post via the Downvoter’s comment, hearing the Contrarian’s yielding or defence, and being able to rate both sides.
Maybe to better operationalise this I should have clarified “Comment Guideline: If you downvote this post, please also add a Reaction or a 30+ character note prepended with “Downvote note:” on what to improve.” because I’ve left myself open to bad-faith critics to miss the point.
My karma is getting obliterated in faithfully trying to facilitate this discourse — do you think that is interesting?
The post is now at “−9” with 8 votes and it’s still unclear to me why — unless through a misunderstanding of the motivation behind my reply to @thenoviceoof, and consequent retributive action against the post. I wrote 864 words that:
Suggested an opportunity to improve the LessWrong team’s model of supporting contrarian views with stability
Described mechanisms that suppress contrarian views
Described how a person confidently expressing non-normative ideas is easy to dismiss, despite this being a necessary condition of being a visionary free-thinker.
Nobody has engaged with these three points and the damage is done: my contrarian view is suppressed, my future posts and comments hold less weight, and I’m disillusioned by the capacity of folks to engage with contrarian viewpoints in good faith.
If I say “just kidding, I think the current system is perfect” — now I’m part of your in-group and you can revert your downvote?
I note that in your leading argument, you do not claim that you did do a Bayesian evaluation, nor even that someone could have done one and received a result that aligns with your conclusion. Just that it’s possible that someone could be presented with a probability to evaluate (which could have any result). That seems like a bad-faith evasion to me.
Did you actually evaluate the probability?
I have downvoted your comment but will revert it if you actually did do a Bayesian evaluation and present it in a follow-up comment, and it looks correct, and in line with your statement that “It is reasonable that I would conclude that”. That’s the norm you’re establishing in this comment chain, after all. A norm that, if it’s not obvious already, I think is both harmful to constructive discussion and wasteful of people’s time.
After that, then we can move on to the rest of the discussion with the results in mind.
Thank you for your densely rich reactions to my comment — I see validity in most of them.
Yes, evaluating the probability in my mind looked like:
Events:
R: I posted the explicit Request as the first line of the post: “Comment Guideline: If you downvote this post, please also add a Reaction or a 30+ character note”
CFP: A Comment was observed that Fulfilled Part of my requested format — it gave a reactionary note, and was on the order of characters I requested (~100 vs. my request of ≥30)
VN: The Vote tally was Negative on the order of a couple of votes
F: A comment was observed that Fulfilled all of my requested format (i.e. CFP plus it being a Downvoter providing a note) — this is what I evaluated to justify looking for a “Downvote-worthy” implication from their comment.
D: The commenter downvoted
The conditional probability I intuited:
P(F=1 | R=1, CFP=1, VN=1) = P(D=1 | R=1, CFP=1, VN=1) = x
My evaluation: I felt that x would be high, i.e. ≥ 0.75, because of the intersection of R, CFP and VN
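The intuition above can be made concrete with Bayes’ rule. A minimal sketch follows; the prior and both likelihoods are illustrative assumptions (not measured LessWrong statistics), chosen only to show how a posterior of ~0.75 could arise:

```python
def posterior_downvoter(prior_d, p_cfp_given_d, p_cfp_given_not_d):
    """P(D=1 | CFP=1) via Bayes' rule.

    Conditioning on R=1 and VN=1 is folded into the choice of prior
    and likelihoods, as in the informal evaluation above.
    """
    numerator = p_cfp_given_d * prior_d
    evidence = numerator + p_cfp_given_not_d * (1 - prior_d)
    return numerator / evidence

# Assumed numbers: a 50% prior that an early commenter on a
# negatively-voted post downvoted it, and downvoters being far
# likelier to leave a request-conforming comment than non-downvoters.
x = posterior_downvoter(prior_d=0.5, p_cfp_given_d=0.9, p_cfp_given_not_d=0.3)
print(round(x, 2))  # 0.75
```

With these particular assumptions the posterior lands exactly at the 0.75 threshold I cited; different (equally defensible) likelihoods would move it substantially, which is the commenter’s point about intuition versus calculation.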
I admit that I “jumped the gun” by acting on this Bayesian instead of first asking for clarification, and then responding. I had a few motivations for this:
Receiving a comment satisfying my Comment Guideline (event F above) was necessary for me to be able to operationalise my idea: that is to say that I wanted to be able to agree or disagree and state that because of this I was going to “hold them accountable” and vote on their comment positively or negatively.
I was in the mood to write and express myself, and didn’t want to wait — indeed the first-commenter in question only provided clarification 16 hours later. In the meantime, my karma was obliterated.
Given (1), (2) and my high evaluated probability of F, I figured it was worth it on balance to push forwards.
I also see now that giving attention to spelling out why I was assuming that they were “implying that my post should be downvoted” — with the formalised Bayesian above — could have facilitated better rationalist discourse. My excuse would be that my attention was pulled in a few different directions, and I prioritised simply showcasing the operationalised version of my recommendation / feature request.
On your disagreement with “Nobody has engaged with these points”: I think I agree, and I could have been more precise. Nobody had, at least when I wrote that, engaged directly along the lines of reasoning of any of those points. However, engagement like “You may be interested in a very similar discussion” or “Sometimes, a post or comment seems so far from epistemic virtue as to be not worth spending effort describing all the problems. I mutter “not even wrong”, downvote, and move on.” does engage with them at a meta-level, by providing constructive feedback on the post.
Okay, so no actual Bayesian calculation, just an intuition.
Your post made the claim that it was substantially likely that most downvotes had no corresponding comment. If we model this as a set of probabilities over readers, it seems reasonable to use P(F_r), P(D_r), P(U_r), P(C_r), and P(CFP_r) for each reader r, where C_r is the event of reader r providing any comment at all (whether or not it is a CFP comment).
Your expectation required P(D_r) > P(U_r), since you expected the post to be overall downvoted. This condition also implies that at some point VN holds. You also expected P(F_r | D_r) to be low, say < 0.3. If P(F_r | D_r) were higher, then you could not reasonably expect to see multiple downvotes with no corresponding explanatory comment.
Now let us examine P(CFP_r | C_r). Looking over the site, almost all comments are in some way reactionary to the thing they are commenting on, and all but a tiny minority are ≥ 30 characters. So P(CFP_r | C_r) > 0.8 is likely in the background, not just under condition F_r. Also, looking at other posts, the number of comments seems to be on average about half the number of votes, so P(C_r) ≈ (1/2)(P(D_r) + P(U_r)).
Having made a specific request (that you did not expect to be followed), did you expect to see fewer comments as a fraction of votes overall, compared with other posts? You didn’t appear to think so, or it should have shown in your reasoning above. Likewise for P(CFP_r | C_r, R).
The condition VN is roughly the case D-U=2 (in this case I think it was exactly D=2, U=0), so your expectation E[C | VN, R] should have been around 1 to 3, and E[CFP | VN, R] about 1 to 2. You should also have expected E[F | VN, R] < 0.6.
So it seems to me to be quite a mistake to conclude P(F_r | CFP_r, R, VN) > 0.75.
That’s even without considering the nature of the comment itself, which made no criticism of your post at all and appeared more informative than anything else, linking it to a previous discussion on the matter.
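The expected-count argument above can be sketched as a back-of-envelope calculation. Every number below is an assumption taken from the comment (the ~0.8 background CFP rate, the < 0.3 bound on downvoters leaving a note, and the midpoint of the expected 1 to 3 comments given the explicit request); none are measured data, so this is illustrative only.

```python
# Back-of-envelope sketch of the expected-count argument.
# All rates are the comment's stated assumptions, not measured data.

D, U = 2, 0                 # the VN condition: two downvotes, no upvotes
e_comments = 2.0            # midpoint of the expected 1-3 comments given R
p_cfp_given_comment = 0.8   # assumed background rate of CFP-format comments
p_f_given_downvote = 0.3    # assumed upper bound on downvoters leaving a note

e_cfp = p_cfp_given_comment * e_comments   # expected CFP comments
e_f = p_f_given_downvote * D               # expected downvoter notes (F)

# Rough chance that a single observed CFP comment is a downvoter's note:
x = e_f / e_cfp
print(round(x, 3))   # 0.375, well below the 0.75 threshold
```

Under these assumed rates the observed CFP comment is more likely than not to come from a non-downvoter, which is the substance of the objection.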
Sorry, I think you’re putting far too much weight on something that is not my position.
My closing line, verbatim:
If I thought it true that “it [is] substantially likely that most downvotes had no corresponding comment”, that would look like me closing with:
I’m describing, explicitly in my post, a different phenomenon that contains more nuance: specifically low signal early votes that suppress visibility.
I say:
and
and in my discussion with @Drake Morrison :
None of this looks like the claim: “it was substantially likely that most downvotes had no corresponding comment.”
If you want to adjust your calculation, you would need to account for my true position, which is that VN (recall: my post having a couple of negative votes) is a prerequisite for any comment to be from a Downvoter.
However, obviously it’s also a small sample size. That means it’s high variability, and we shouldn’t put much weight on it.
To simplify things, even though VN is a prerequisite in my view we can even drop it (due to it holding little weight), so we’re approximately evaluating:
P(F=1 | R=1, CFP=1) = P(D=1 | R=1, CFP=1) = x
Edit:
And implicit in this rebalancing (maybe) — adding this edit to clarify — is that I don’t agree with this:
I absolutely did not feel that CFP was just background noise. I gave significant weight to the fact that the first line (R) explicitly requested comments of the form of CFP.
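The crux of this disagreement can be made concrete with a small Bayes'-rule sketch. The prior and likelihoods below are purely illustrative assumptions (they appear in neither comment): they only show how treating a CFP-format comment as background noise versus as request-driven changes the posterior.

```python
# Illustrative Bayes'-rule sketch of the CFP disagreement.
# All probabilities here are assumed for illustration only.

def posterior_downvoter(prior_d, p_cfp_given_d, p_cfp_given_not_d):
    """P(D | CFP, R): chance a CFP-format commenter is a downvoter."""
    num = p_cfp_given_d * prior_d
    den = num + p_cfp_given_not_d * (1 - prior_d)
    return num / den

prior = 0.5  # assumed prior that a commenter on a net-negative post downvoted

# Commenter's framing: CFP format is background noise common to all comments,
# so observing it carries no signal about downvoting.
x_noise = posterior_downvoter(prior, 0.8, 0.8)   # stays at the prior, 0.5

# Author's framing: the explicit request (R) mostly moves downvoters to write
# CFP-format notes, so observing one is real evidence of a downvoter.
x_signal = posterior_downvoter(prior, 0.9, 0.3)  # rises to about 0.75
```

The disagreement is thus not about Bayes' rule but about the likelihood ratio P(CFP | D, R) / P(CFP | not D, R): near 1 on the commenter's view, well above 1 on the author's.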
To be clear, I didn’t downvote you: I did think “hmm, wasn’t there a recent big discussion around downvote-without-commenting norms which didn’t result in any changes?” and went and found it. I can see why you’d think I did downvote you; you specifically requested it! (Well, requested `if downvote then comment`)
Haha well I alienated a lot of people by inferring that and using it to operationalise my recommendation, but I appreciate you clarifying this and acknowledging that the conclusion that I reached was reasonable.
Per the terms that I explicitly communicated, I’ve flipped my votes to approve and agree with your comment.
Sometimes, a post or comment seems so far from epistemic virtue as to be not worth spending effort describing all the problems. I mutter “not even wrong”, downvote, and move on.
I have not voted either way on the current post.
Thank you for providing your evaluative criteria.
To me you hit on a precise, valid downvote signal: “this post is effortful for me to falsify”. That would be helpful to writers like me to receive as a precise, labelled signal in order to optimise.
The disconnect I guess is that to me, all of my logic is robustly defensible and wholly laid out within the post. That’s precisely why I’m so keen on someone, anyone, being able to precisely state any logical inconsistency or area that lacks clarity.
If they were to do so, then I could expand on the area where our world-views/maps [https://www.lesswrong.com/w/map-and-territory] are too distinct, in pursuit of correcting one of our maps. To me this is the essence of rationality, and it’s jarring that I’m not able to get it on this platform.
Since I have to defer to an LLM for this: ChatGPT5 Pro gives me the following checklist [points 1 through 5; all other language is my own] to avoid being “not even wrong”. I’ve added my view on how strongly I’m doing against each item:
State one main claim in plain language.
Strongly achieved: “we should devise mechanisms that provide an actionable route for low-quality contrarian posts/comments to become high-quality to improve the platform as a whole.”
Define key terms (what exactly do you mean by X?).
Moderately achieved — I could have formatted this differently, instead of containing it within the prose:
Contrarian — surely a standard term; I illustrated it as “Being Peculiar” or exhibiting “visionary arrogance”
“High quality content” — stated as “[content that is] usually heavily upvoted [on the platform]”
“devise mechanisms” — stated both as “implementing a system where negative votes on a post require the voter to cite their reason for negative voting — using either the Reaction system or a brief note of 30+ characters.” and self-referentially on this post as I describe the “comment following my guideline” → “me responding” → “readers casting their vote” dynamic
“the platform as a whole” — described LessWrong, and its central mission
Show your reasoning chain: premises → inference → conclusion.
Strongly achieved:
Premises: Spend hours putting together a post that contains a contrarian view that I’m proud of, but then receive no feedback besides a couple of downvotes.
Inference: Contrarian view is too loosely dismissed on the platform “The only way you can be a visionary is to express a bunch of things that are, by definition, not the societal norm… [but] people with societally normative viewpoints will disagree with you.”
Conclusion: “I’m anticipating this post to be a straight shot to meta-irony: I have confidently made a non-normative claim, so expect a couple of negative post votes, absent of material feedback.” — this has extended to −16 karma across 11 votes, and still nobody has engaged to offer a logical inconsistency.
Cite evidence and say what would change your mind.
Strongly achieved:
Using my great wit,[this is use of hyperbole for humour] I self-referentially operationalised the post to illustrate my point. What would change my mind is if people actually upvoted me.
Quantify uncertainty (even roughly) and address the strongest counterargument.
Weakly achieved — I guess I was leaving this open for audience participation.
Uncertainty: I’m uncertain about whether my writing style is suitable for this platform, or if I should defer to in-person interactions and maybe video content to express my ideas instead.
The strongest counterargument is that I should just write my post like an automaton,[this is also use of hyperbole for humour — instead of “like an automaton”, precisely I mean that a common writing style on LW is to provide a numbered/bulleted list of principles] instead of infusing the humour and illustrative devices that come naturally to me.
The problem with this is that writing like that isn’t fun to me and doesn’t come as naturally. In essence it would be a barrier to me contributing anything at all. I view that as a shame, because I do believe my post wholly consists of robust logic and satisfies this “avoid being ‘not even wrong’” checklist. If I had provided this checklist at the top of my post, would it have made my post easier to parse and thus well-received by the community? Or am I still missing something?
I agree that this is a problem. I don’t think the best way to fix it is to either change culture or change the voting system. There is probably a change to the site that would help with it. The trouble is that when something has a lot of hard-to-interpret noise in it, as is usually the case with both crackpot rambling and crackpot-flavor-but-actually-insightful expert rambling, it’s hard to spend the time to figure out if the details resolve one way or the other. Also, like, experts can output crackpottery on their own field of expertise sometimes (I didn’t have anyone particular in mind besides yann; I asked Sonnet 4.5, who suggested Linus Pauling on vitamin c, Lord Kelvin on the age of the earth, and Fred Hoyle on the steady state universe). Like, the whole reason we have the scientific standards we do is that even if one is an expert in a field who has made previous verified breakthroughs, it’s really easy to have a brilliant, wrong idea. Maybe the value here is in making it easy to tell whether other people will find your post easy to follow? Probably the primary thing I’d suggest would be trying to organize the post progressive-jpeg-style: try to fit as much as possible as early as possible, so that it becomes clear quickly why your post is relevant-or-not for a given reader. Also just, try to compress as much as you can.
of course, these are annoyingly high standards. your grid dynamic thing seems like a thing I’ve seen happen. it’s probably at least some of why I feel motivated to comment on low-upvote wacky posts. it’s a bit of a chore to do well, though, and probably the primary reason I do it is procrastination.
if cultural things are viable, probably a good one would be people being very willing to put reacts when they downvote, yeah.
(I didn’t read your post in full because I found it to be taking longer to parse than I felt like spending, fwiw. I’m responding to a skim.)
Thank you for sharing your expert insight!
This is a fair point, and in some cases it’s not too much additional cognitive load to structure things this way. I have noticed, though, that a post can be “...complex enough for me to make the associations I’ve made and distill them into a narrative that makes sense to me. I can’t one-shot a narrative that lands broadly”. Other times, the fun and the motivation in writing come from crafting the narrative creatively. If narratives had to follow line by line, we wouldn’t get things like Infinite Jest.
A low-cost idea I had that could help: folks who get their post or comment downvoted could receive a message linking back to the New User’s Guide to LessWrong but mainly up-front highlighting that these contra-contrarian forces exist, and “If you’ve been downvoted and/or rate-limited, don’t take it too hard. LessWrong has fairly particular standards. My recommendation is to read some of the advice at the end here and try again.”[1]
I’ve spoken with multiple smart rationalist people in person who have described being discouraged from writing on LessWrong because of echo chamber effects / imbalanced curation.
https://www.lesswrong.com/posts/hHyYph9CcYfdnoC5j/automatic-rate-limiting-on-lesswrong
Sorry, to be clear, this is not a valid comment guideline on LessWrong. The current moderation system allows authors to moderate comments (assuming they have the necessary amount of karma). It does not allow authors to change how people vote. I can imagine at some point maybe doing something here, but it seems dicey, and is not part of how LessWrong currently works.
Got it — apologies for bending these rules as an attempt to operationalise the post; I didn’t realise this might break a rule.
Can I state a lighter version instead, where I encourage all standard voting behaviour, but append a request for downvote justification? I’ve replaced the Comment Guideline accordingly as a placeholder until I receive further clarification.
I.e:
*** Voting Guideline: You should freely vote and react according to your views and LessWrong norms — I do not want to infringe upon this. ***
*** Comment Request: However, please allow me to make a request: if you do downvote this post and are willing to make that transparent, it would help me to operationalise the recommendation I put across in this post if you add a Reaction or a 30+ character comment prepended with “Downvote note:” on what to improve. ***
Definitely! Requests are totally fine!