I wrote a userscript / Chrome extension / zero-installation bookmarklet to make finding recent comments over at Slate Star Codex a lot easier. See the screenshots. I’ll also post this next time SSC has a new open thread (unless Yvain happens to notice this first).
Bakkot
I would strongly support just banning culture war stuff from LW 2.0. Those conversations can be fun, but they require disproportionately large amounts of work to keep the light / heat ratio decent (or indeed > 0), and they tend to dominate any larger conversation they enter. Besides, there are enough places for discussion of those topics already.
(For context: I moderate /r/SlateStarCodex, which gets several thousand posts in its weekly culture war thread every single week. Those discussions are a lot less bad than culture war discussions on the greater internet, I think, and we do a pretty good job keeping discussion to that thread only, but maintaining both of these requires a lot of active moderation, and the thread absolutely affects the tone of the rest of the subreddit even so.)
I wrote a userscript to add a delay and checkbox reading “I swear by all I hold sacred that this comment supports the collective search for truth to the very best of my abilities.” before allowing you to comment on LW. Done in response to a comment by army1987 here.
Edit: per NancyLebovitz and ChristianKl below, solicitations for alternative default messages are welcomed.
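The gating logic of such a userscript can be sketched roughly as follows. This is a hypothetical illustration, not the actual script: the function name, the delay length, and the idea of wiring a checkbox to a pure "may I submit yet?" check are all my assumptions.

```javascript
// Hypothetical sketch of the userscript's gate: submission is allowed only
// after (a) the pledge checkbox is ticked and (b) a delay has elapsed.
// Names and the delay value are assumptions, not taken from the real script.
const DELAY_MS = 5000; // assumed delay before commenting is permitted

function makeSubmitGate(delayMs = DELAY_MS) {
  const openedAt = Date.now(); // when the comment box was opened
  let pledged = false;

  return {
    // Wired to the checkbox's change event in the real script.
    setPledged(value) { pledged = Boolean(value); },
    // Called on each submit attempt; blocks until pledge + delay are satisfied.
    canSubmit(now = Date.now()) {
      return pledged && (now - openedAt) >= delayMs;
    },
  };
}
```

In an actual userscript this gate would be checked in the comment form's submit handler, cancelling the event while `canSubmit()` is false.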
I happen to be studying model theory at the moment. For anyone curious, when Eliezer says ‘If X ⊢ Y, then X ⊨ Y’ (that is, if a theory proves a statement, that statement is true in every model of the theory), this is known as soundness. The converse is completeness, or more specifically semantic completeness, which says that if a statement is true in every model of a theory (in other words, in every possible world where that theory is true), then there is a finite proof of the statement from the theory. In symbols this is ‘If X ⊨ Y, then X ⊢ Y’. Note that this notion of ‘completeness’ is not the one used in Gödel’s incompleteness theorems.
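Stated formally, in the standard notation (with T a theory and φ a sentence):

```latex
% Soundness: syntactic provability implies semantic consequence.
\text{Soundness:} \quad T \vdash \varphi \;\Longrightarrow\; T \models \varphi

% Semantic completeness (Gödel's completeness theorem, 1929): the converse.
\text{Completeness:} \quad T \models \varphi \;\Longrightarrow\; T \vdash \varphi
```

Here T ⊨ φ means φ holds in every model of T, and T ⊢ φ means there is a finite derivation of φ from T in the proof system.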
My experience has been exactly the opposite: young communities thrive without gardening, but as they grow they either devolve into low average value (Digg as it was, most large subreddits) or are heavily pruned (HN, r/askscience). If there’s an influx of people, heavy moderation is mandatory if you want to avoid regression to the mean.
Even a friendly AI would view the world in which it’s out of the box as vastly superior to the world in which it’s inside the box. (Because it can do more good outside of the box.) Offering advice is only the friendly thing to do if it maximizes the chance of getting let out, or if the chances of getting let out before termination are so small that the best thing it can do is offer advice while it can.
I’d be very interested in a citation on the claim that “the evidence shows that teacher recommendations have zero correlation with aptitude in a field”.
I’m told, and quite willing to believe, that your salary has more to do with the five minutes of salary negotiation than with the next several years of work. I am also told that salary negotiation is very much a skill.
As such, it seems it would be worth a fairly substantial amount of time and money to practice and/or get coaching in this skill. Is this done? That is, how likely am I to be able to find someone, preferably someone who has worked on the business end of salary negotiation at somewhere like Google, who I can pay to practice salary negotiation with?
ETA: I’ve read extensively about how to negotiate (though of course there’s always something more). What I’m interested in is practice.
Eh, yes and no. This attitude (“we know what’s best; your input is not required”) has historically almost always been wrong and frequently dangerous, and deserves close scrutiny, and I think it mostly fails here. In very, very specific instances (e.g. GiveWell-esque philanthropy), maybe not, but in terms of, say, feminism? If anyone on LW is interested in tackling feminist issues, having very few women would be a major problem. Even when not addressing specific issues, if you’re trying to develop models of how human beings think, and everyone in the conversation is a very specific sort of person, you’re going to have a much harder time getting it right.