Steven K
steven0461
I could never stand it when people made thinking mistakes, especially my own.
I got into OB-style rationalism via Eliezer’s writings on the Thing Not To Be Named. I got into that subject via >H and futurist sites (McCarthy, Bostrom, Sandberg, Pearce, Moravec, Hanson).
If agreement votes aren’t going to be used, why not do away with them altogether and keep the current system of voting on quality alone? True comments are higher quality than false ones, so agreement should factor into quality judgments anyway.
I agree it’s losing information, but that has to be weighed against the inconvenience of multiple dimensions. To the extent that truth is positively correlated with quality, you’re just making people click twice, and I suspect clicks are a limited resource.
As I see it the voting system is there to put comments in a convenient order and remove the really bad ones from sight, not to provide opinion poll information.
Before people can submit their own posts, it would be good to spell out what counts as on-topic.
The header image is almost 400 KB; that seems like a lot.
OK, so according to you and Benja, the point is to have the agree/disagree buttons there mainly as a lightning rod that keeps agreement from affecting quality votes. That’s a good point, but I wonder whether it’s worth it and whether there are better ways to accomplish the same thing.
I also wonder if there should be a button labeled “malevolent cantaloupe” so the unserious people will click on that instead of voting.
A problem here is that it takes something like tens or hundreds of thousands of hands for the signal to emerge from the noise.
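If this is about the variance of poker win rates (and with illustrative figures that are my own assumptions, say a true win rate of $\mu = 5$ big blinds and a standard deviation of $\sigma = 80$ bb per 100 hands), a rough check: the cumulative edge clearly separates from the noise only once

$$ \mu n > 2\sigma\sqrt{n} \quad\Longleftrightarrow\quad n > \left(\frac{2\sigma}{\mu}\right)^2 = \left(\frac{2 \cdot 80}{5}\right)^2 = 1024 $$

blocks of 100 hands, i.e. on the order of 100,000 hands, consistent with the range above.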
With comment karma we should definitely stop trying to use upvotes and downvotes for opinion polls.
If I gave the book to a friend it would probably be for carefully-argued futurism content.
If karma is the sum of individual post scores, does that reward quantity too much relative to quality? (E.g., ten +2 comments would outweigh two +9 posts.)
Should I? Is this a common outcome?
I don’t know whether karma is itself a good measure of rationality, but it might be a good subject to train calibration on. E.g., whenever you make a post or comment, there could be an optional field where you enter your expected value and standard deviation for what its score will be one week later.
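As a sketch of how such predictions might be scored (hypothetical names throughout; nothing like this exists on the site), one could reward each prediction with the log-likelihood of the realized score under the predicted normal distribution:

```python
import math

def calibration_log_score(predicted_mean, predicted_sd, actual_score):
    """Log-density of the realized karma score under the user's
    Normal(predicted_mean, predicted_sd) prediction; higher is better."""
    z = (actual_score - predicted_mean) / predicted_sd
    return -0.5 * z * z - math.log(predicted_sd * math.sqrt(2 * math.pi))

# A user predicted a score of 10 with SD 5; the comment ended up at 12.
print(calibration_log_score(10, 5, 12))  # prints approximately -2.61
```

The log score is a proper scoring rule, so it pays users for honest means and for SDs that are neither overconfident nor padded.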
As I see it, rationality is much more about choosing the right things to use one’s success for than it is about achieving success in the conventional sense. Hopefully it also helps with the latter, but it may well be detrimental to people’s pursuit of various myopic, egoistic, and parochial goals that they hold but would reject or downgrade in importance if they were more rational.
You realize, of course, that under this policy everyone stays Christian forever.
I agree. In theory, studying the case that deviates more from the average (here, great success) should yield more information. If most widgets are small and a few are big, and you want to know which properties of widgets correlate with size, and you can study only one widget, you should study a big one.
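There’s a rough formal analogue (my framing, assuming a simple linear model relating some property $x$ to size): in ordinary least squares, the slope estimate has variance

$$ \operatorname{Var}(\hat\beta) = \frac{\sigma^2}{\sum_i (x_i - \bar{x})^2}, $$

so each observation tightens the estimate in proportion to $(x_i - \bar{x})^2$, and the observation farthest from the mean is the most informative one.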
Great post, food for thought. I sometimes distinguish between beliefs and impressions, but should do so more.
Suppose ideas change incrementally by mutation, the average false idea does more damage than the truth, and ideas trend noisily toward doing less damage as they get closer to the truth. Is that a general moral argument against spreading and believing specific false ideas that seem beneficial? (Both because the neighbors of beneficial-seeming false ideas regress to a more damaging mean than the neighbors of the truth, and because the truth gains some stability against mutation by being the truth.)
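A toy simulation (entirely my own construction) of the regression-to-the-mean half of this: put ideas on a line with the truth at 0, let realized harm be distance from the truth plus noise, and compare mutations of the truth with mutations of a false idea that happened to draw a low harm sample.

```python
import random

random.seed(0)
TRUTH = 0.0

def harm(idea):
    # Harm trends noisily upward with distance from the truth.
    return abs(idea - TRUTH) + random.gauss(0, 0.5)

def mutate(idea, step=0.3):
    # Small incremental change to the idea.
    return idea + random.gauss(0, step)

# A false idea far from the truth that a lucky noise draw made look beneficial.
lucky_false_idea = 2.0

N = 100_000
avg_harm_near_truth = sum(harm(mutate(TRUTH)) for _ in range(N)) / N
avg_harm_near_false = sum(harm(mutate(lucky_false_idea)) for _ in range(N)) / N

print(avg_harm_near_truth)   # ~0.24: the truth's neighbors stay nearly harmless
print(avg_harm_near_false)   # ~2.0: the false idea's neighbors revert to high harm
```

The lucky draw doesn’t transfer: the false idea’s mutational neighborhood has the expected harm of its true distance from the truth.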
Wait, couldn’t people have been programmed by evolution to grieve no matter what they truly believe about where the deceased went?
Not counting each comment’s free first point toward karma would be an improvement, I think.
Specifically, should posts be about rationality, or can they be “mere” applications of rationalist insights to specific topics?
How to make sense of metaethics. I would particularly name The Meaning of Right.