On the importance of Less Wrong, or another single conversational locus

Epistemic status: My actual best bet. But I used to think differently; and I don't know how to fully explicate the updating I did (I'm not sure what fully formed argument I could give my past self, that would cause her to update), so you should probably be somewhat suspicious of this until explicated. And/or you should help me explicate it.
It seems to me that:
  1. The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.

  2. Despite all priors and appearances, our little community (the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle. This sounds like hubris, but it is at this point at least partially a matter of track record.[1]

  3. To aid in solving this puzzle, we must probably find a way to think together, accumulatively. We need to think about technical problems in AI safety, but also about the full surrounding context—everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about "ways of thinking"—both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better.[2]

  4. One feature that is pretty helpful here is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another. By "a conversation", I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably valid; and point out apparent errors and then have that pointing-out be actually taken into account, or else replied to.

  5. One feature that really helps things be "a conversation" in this way is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read. Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.

  6. We have lately ceased to have a "single conversation" in this way. Good content is still being produced across these communities, but there is no single locus of conversation, such that if you're in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such. There is no one place you can post to where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence. Without such a locus, it is hard for conversation to build in the correct way. (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)

It seems to me, moreover, that Less Wrong used to be such a locus, and that it is worth seeing whether Less Wrong or some similar such place[3] may be a viable locus again. I will try to post and comment here more often, at least for a while, while we see if we can get this going. Sarah Constantin, Ben Hoffman, Valentine Smith, and various others have recently mentioned planning to do the same.
I suspect that most of the value generated by having a single shared conversational locus is not captured by the individual generating it: the value of "a conversation" with better structural integrity / more coherence is real, but it accrues to the community as a whole rather than to any one contributor. Insofar as there are such "externalized benefits" to be had by blogging/commenting/reading from a common platform, it may make sense to regard oneself as exercising civic virtue by doing so, and to deliberately do so as one of the uses of one's "make the world better" effort. (At least if we can build up toward in fact having a single locus.)
If you believe this is so, I invite you to join with us. (And if you believe it isn't so, I invite you to explain why, and to thereby help explicate a shared body of arguments as to how to actually think usefully in common!)
[1] By track record, I have in mind most obviously that AI risk is now relatively credible and mainstream, and that this seems to have been due largely to (the direct + indirect effects of) Eliezer, Nick Bostrom, and others who were poking around the general aspiring rationality and effective altruist space in 2008 or so, with significant help from the extended communities that eventually grew up around this space. More controversially, it seems to me that this set of people has probably (though not indubitably) helped with locating specific angles of traction around these problems that are worth pursuing; with locating other angles on existential risk; and with locating techniques for forecasting/prediction (e.g., there seems to be similarity between the techniques already being practiced in this community, and those Philip Tetlock documented as working).
[2] Again, it may seem somewhat hubristic to claim that a relatively small community can usefully add to the world's analysis across a broad array of topics (such as the summed topics that bear on "How do we create an existential win?"). But it is generally smallish groups (rather than widely dispersed millions of people) that can actually bring analysis together; history has often involved relatively small intellectual circles that make concerted progress; and even if things are already known that bear on how to create an existential win, one must probably still combine and synthesize that understanding into a smallish set of people that can apply it to AI (or what have you).
It seems worth a serious try to see if we can become (or continue to be) such an intellectually generative circle; and it seems worth asking what institutions (such as a shared blogging platform) may increase our success odds.
[3] I am curious whether Arbital may become useful in this way; making conversation and debate work well seems to be near their central mission. The Effective Altruism Forum is another plausible candidate, but I find myself substantially more excited about Less Wrong in this regard; it seems to me one must be free to speak about a broad array of topics to succeed, and this feels easier to do here. The presence and easy linkability of Eliezer's Less Wrong Sequences also seems like an advantage of LW.
Thanks to Michael Arc (formerly Michael Vassar) and Davis Kingsley for pushing this/related points in conversation.