On the importance of Less Wrong, or another single conversational locus

Epistemic status: My actual best bet. But I used to think differently; and I don’t know how to fully explicate the updating I did (I’m not sure what fully formed argument I could give my past self that would cause her to update), so you should probably be somewhat suspicious of this until explicated. And/or you should help me explicate it.
It seems to me that:
  1. The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.

  2. Despite all priors and appearances, our little community (the “aspiring rationality” community; the “effective altruist” project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle. This sounds like hubris, but it is at this point at least partially a matter of track record.[1]

  3. To aid in solving this puzzle, we must probably find a way to think together, accumulatively. We need to think about technical problems in AI safety, but also about the full surrounding context—everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about “ways of thinking”—both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better.[2]

  4. One feature that is pretty helpful here, is if we somehow maintain a single “conversation”, rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another. By “a conversation”, I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; and point out apparent errors and then have that pointing-out be actually taken into account or else replied-to.

  5. One feature that really helps things be “a conversation” in this way, is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read. Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.

  6. We have lately ceased to have a “single conversation” in this way. Good content is still being produced across these communities, but there is no single locus of conversation, such that if you’re in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such. There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call out anyone who later posts reasoning that ignores your evidence. Without such a locus, it is hard for conversation to build in the correct way. (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)

It seems to me, moreover, that Less Wrong used to be such a locus, and that it is worth seeing whether Less Wrong or some similar such place[3] may be a viable locus again. I will try to post and comment here more often, at least for a while, while we see if we can get this going. Sarah Constantin, Ben Hoffman, Valentine Smith, and various others have recently mentioned planning to do the same.
I suspect that most of the value generated by a single shared conversational locus is not captured by the individuals generating it: the value of having “a conversation” with better structural integrity / more coherence is large, but it is pretty widely distributed. Insofar as there are “externalized benefits” to be had by blogging/commenting/reading from a common platform, it may make sense to regard oneself as exercising civic virtue by doing so, and to deliberately do so as one of the uses of one’s “make the world better” effort. (At least if we can build up toward in fact having a single locus.)
If you believe this is so, I invite you to join with us. (And if you believe it isn’t so, I invite you to explain why, and to thereby help explicate a shared body of arguments as to how to actually think usefully in common!)
[1] By track record, I have in mind most obviously that AI risk is now relatively credible and mainstream, and that this seems to have been due largely to (the direct + indirect effects of) Eliezer, Nick Bostrom, and others who were poking around the general aspiring rationality and effective altruist space in 2008 or so, with significant help from the extended communities that eventually grew up around this space. More controversially, it seems to me that this set of people has probably (though not indubitably) helped with locating specific angles of traction around these problems that are worth pursuing; with locating other angles on existential risk; and with locating techniques for forecasting/prediction (e.g., there seems to be similarity between the techniques already being practiced in this community, and those Philip Tetlock documented as working).
[2] Again, it may seem somewhat hubristic to claim that a relatively small community can usefully add to the world’s analysis across a broad array of topics (such as the summed topics that bear on “How do we create an existential win?”). But it is generally smallish groups (rather than widely dispersed millions of people) that can actually bring analysis together; history has often involved relatively small intellectual circles that make concerted progress; and even if things are already known that bear on how to create an existential win, one must probably still combine and synthesize that understanding within a smallish set of people that can apply the understanding to AI (or what have you).
It seems worth a serious try to see if we can become (or continue to be) such an intellectually generative circle; and it seems worth asking what institutions (such as a shared blogging platform) may increase our success odds.
[3] I am curious whether Arbital may become useful in this way; making conversation and debate work well seems to be near their central mission. The Effective Altruism Forum is another plausible candidate, but I find myself substantially more excited about Less Wrong in this regard; it seems to me one must be free to speak about a broad array of topics to succeed, and this feels easier to do here. The presence and easy linkability of Eliezer’s Less Wrong Sequences also seems like an advantage of LW.
Thanks to Michael Arc (formerly Michael Vassar) and Davis Kingsley for pushing this/related points in conversation.