I want to say something like: you are not owed attention on your post just because it is written with good logic. That’s sort of harsh, but I do think that you have to earn the reader’s trust. People downvote for all sorts of reasons, and not all of them are because of some logical mistake you made. Sometimes it’s just that the post is not relevant, or seems elementary, or isn’t written well, or doesn’t engage with previous work.
I can understand that getting unexplained downvotes is demoralizing, but demanding people spend more of their own effort and time to engage with you is a losing proposition. You have to make it worth their time.
But, I’m feeling generous today and I’ll try and write some of my thoughts anyway.
I found this post confusing to read; I had to go back and re-read the whole thing to understand what you were even saying. For example, one of the first sentences:
On this post I will intentionally try to illustrate how I would see my recommendation playing out:
And yet, I don’t know what your recommendation even is yet. Take some time to explain your recommendation, and why I should care first, then I know what you’re talking about in this section.
There are similar sorts of problems all over the piece with assumptions that aren’t justified, jumping around tonally between sections, and mixing up explaining the problem with your preferred solution. It’s just not a well-written piece, or so I judged it.
Hopefully that helps!
Thank you for the feedback! On the path to benevolence there’s a whole lot of friction. I agree with most of what you said, and I think we can extract substantive value that builds on my post:
Demanding people spend more of their own effort and time to engage with you is a losing proposition. You have to make it worth their time.
I agree, but I feel that there is a distinct imbalance where a post can take hours of effort, and be cast aside with a 10-second vibe check and a one-second downvote click. I believe that the platform experience for both post authors and readers could be significantly improved by adding a second post-level signal that takes only an additional few seconds: this could be a react like “Difficult to Parse” or a ~30-character tip like “Same ideas posted recently: [link]”.
Given the existing author/reader time-investment imbalance, it feels fair to suggest adding this.
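To make the suggestion concrete, here is a minimal sketch of how such a signal might be modelled. The react name (“Difficult to Parse”) and the ~30-character cap come from the suggestion above; everything else, including the `VoteSignal` record itself, is a hypothetical illustration, not a real LessWrong API:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical canned reacts a voter could attach; only "Difficult to
# Parse" is from the proposal above, the set itself is illustrative.
CANNED_REACTS = {"Difficult to Parse"}
TIP_MAX_CHARS = 30  # the ~30-character free-text tip from the proposal


@dataclass
class VoteSignal:
    """A vote that optionally carries a few seconds' worth of 'why'."""
    direction: int               # +1 upvote, -1 downvote
    react: Optional[str] = None  # one of CANNED_REACTS, or None
    tip: Optional[str] = None    # short free-text hint, e.g. a link

    def __post_init__(self):
        if self.direction not in (-1, 1):
            raise ValueError("direction must be +1 or -1")
        if self.react is not None and self.react not in CANNED_REACTS:
            raise ValueError(f"unknown react: {self.react!r}")
        if self.tip is not None and len(self.tip) > TIP_MAX_CHARS:
            raise ValueError(f"tip exceeds {TIP_MAX_CHARS} characters")


# A downvote that still tells the author why, in a few extra seconds:
signal = VoteSignal(direction=-1, react="Difficult to Parse")
```

The point of the cap is that the extra signal stays cheap for the reader while still being far more actionable for the author than a bare downvote.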
Take some time to explain your recommendation, and why I should care first, then I know what you’re talking about in this section.
This is a valid call-out. In fairness, it was an imprecision on my part: I added the “Operationalising my recommendation” section in the first (2025/09/13) edit, and overlooked that this meant it preceded my stating the recommendation. I’ve updated the post to state the recommendation upfront. [Meta note: this, to me, is the beauty of rationality and the LessWrong platform: we can co-create great logical works. I hope this doesn’t look too much like “relying on the reader to proof-read” in lieu of https://www.lesswrong.com/posts/nsCwdYJEpmW5Hw5Xm/lesswrong-is-providing-feedback-and-proofreading-on-drafts ]
There are similar sorts of problems all over the piece with assumptions that aren’t justified, jumping around tonally between sections, and mixing up explaining the problem with your preferred solution.
This connects to the uncertainty I relayed to @Richard_Kennaway:
Uncertainty: I’m uncertain about whether my writing style is suitable for this platform, or if I should defer to in-person interactions and maybe video content to express my ideas instead.
The strongest counterargument [to my post] is that I should just write my post like an automaton,[this use of hyperbole for humour — instead of “like an automaton”, precisely I mean that a common writing style on LW is to provide a numbered/bulleted list of principles] instead of infusing the [attempts at] humour and illustrative devices that come naturally to me.
I really enjoy Scott Alexander’s writing and, while clearly he’s a far more distinguished and capable writer than I am, I feel he is a good role model: someone who uses rationality but also storytelling prose to relay his point. That’s effectively what I hope to accomplish, but I can only really get there if I get feedback on my writing.
I have three posts in this style that did receive some degree of positive feedback: [1], [2], [3]
At the same time, this is a red flag to me:
It’s just not a well-written piece, or so I judged it.
In [rare] cases where I do successfully compel someone to fully read and engage with my post, I have a huge duty to ensure that they enjoy and find insightful value in my writing.
The last thing I want is to actually waste someone’s time.
I agree, but I feel that there is a distinct imbalance where a post can take hours of effort, and be cast aside with a 10-second vibe check and 1 second “downvote click”.
You don’t get points for effort. Just for value.
One way to think of it is like you are selling some food in a market. Your potential buyers don’t care if the food took you 7 hours or 7 minutes to make; they care how good it tastes, and how expensive it is. The equivalent for something like an essay is how useful/insightful/interesting your ideas are, and how difficult/annoying/time-consuming it is to read.
You can decrease the costs (shorter, easy-to-follow, humor), but eventually you can’t decrease them any more and your only option is to increase the value. And well, increasing the value can be hard.
I wholeheartedly agree with you. There is something else going on here though. As I commented on this post, which also (in my view) fell prey to the phenomenon I am describing:
It’s complex enough for me to make the associations I’ve made and distill them into a narrative that makes sense to me. I can’t one-shot a narrative that lands broadly… but until I discover something that I’m comfortable falsifies my hypothesis, I’m going to keep trying different narratives to gather more feedback: with the goal of either falsifying my hypothesis or broadly convincing others that it is in fact viable.
To follow your analogy: I’m not asking that people purchase my sandwiches. I’m just asking that people clarify if they need them heated up and sliced in half, and don’t just tell everyone else in the market that my sandwiches suck.
This directly aligns with a plea I express in the current post:
The strongest counterargument is that I should just write my post like an automaton,[1] instead of infusing the [attempts at] humour and illustrative devices that come naturally to me.
The problem with this is that writing like that isn’t fun to me and doesn’t come as naturally. In essence it would be a barrier to me contributing anything at all. I view that as a shame, because I do believe that all of my logic is robustly defensible and wholly laid out within the post.
I believe that there is value in my ideas, and I’m not that far off repositioning them in a way that will land more broadly. I just need light, constructive feedback to more closely align our maps.
However, in the absence of this light, constructive feedback on LessWrong, I’m quite forcefully cast aside and constrained to other avenues. Epistemic status: I attend a weekly rationality meetup in Los Angeles, I attend AI Safety and AI Alignment Research meet-ups in Los Angeles and San Francisco, and I work directly on and with frontier AI solutions.
[1] This is [attempted] use of hyperbole for humour: instead of “like an automaton”, precisely I mean that a common writing style on LW is to provide a numbered/bulleted list of principles.