I agree, but I feel that there is a distinct imbalance where a post can take hours of effort, and be cast aside with a 10-second vibe check and a one-second “downvote click”.
You don’t get points for effort. Just for value.
One way to think of it is like you are selling some food in a market. Your potential buyers don’t care if the food took you 7 hours or 7 minutes to make; they care how good it tastes and how expensive it is. The equivalent for something like an essay is how useful/insightful/interesting your ideas are, and how difficult/annoying/time-consuming it is to read.
You can decrease the costs (shorter, easy-to-follow, humor), but eventually you can’t decrease them any more and your only option is to increase the value. And well, increasing the value can be hard.
There is something else going on here, though. As I commented on this post, which also (in my view) fell prey to the phenomenon I am describing:
It’s complex enough for me to make the associations I’ve made and distill them into a narrative that makes sense to me. I can’t one-shot a narrative that lands broadly… but until I discover something that I’m comfortable saying falsifies my hypothesis, I’m going to keep trying different narratives to gather more feedback, with the goal of either falsifying my hypothesis or broadly convincing others that it is in fact viable.
To follow your analogy: I’m not asking that people purchase my sandwiches. I’m just asking that people clarify if they need them heated up and sliced in half, and don’t just tell everyone else in the market that my sandwiches suck.
This directly aligns with a plea I express in the current post:
The strongest counterargument is that I should just write my post like an automaton[1] (I add a footnote clarifying this), instead of infusing the [attempts at] humour and illustrative devices that come naturally to me.
The problem with this is that writing like that isn’t fun for me and doesn’t come as naturally. In essence, it would be a barrier to my contributing anything at all. I view that as a shame, because I do believe that all of my logic is robustly defensible and wholly laid out within the post.
I believe that there is value in my ideas, and I’m not that far off repositioning them in a way that will land more broadly. I just need light, constructive feedback to more closely align our maps.
However, in the absence of this light, constructive feedback on LessWrong, I’m quite forcefully cast aside and constrained to other avenues. Epistemic status: I attend a weekly rationality meetup in Los Angeles, I attend AI Safety and AI Alignment Research meetups in Los Angeles and San Francisco, and I work directly on and with frontier AI solutions.
[1] This is an [attempted] use of hyperbole for humour: instead of “like an automaton”, what I mean precisely is that a common writing style on LW is to provide a numbered/bulleted list of principles.