LW2.0: Community, Culture, and Intellectual Progress

This post presents one lens I (Ruby) use to think about LessWrong 2.0 and what we’re trying to accomplish. While it does not capture everything important, it does capture much and explains a little of how our disparate-seeming projects can combine into a single coherent vision.

I describe a complementary lens in LessWrong 2.0: Technology Platform for Intellectual Progress.

(While the stated purpose of LessWrong is to be a place to learn and apply rationality, as with any minimally specified goal, we could go about pursuing it in multiple ways. In practice, I and the other members of the LessWrong team care about intellectual progress, truth, existential risk, and the far future, and these broader goals drive our visions and choices for LessWrong.)

A Goal for LessWrong

Here is one goal that I think it makes sense for LessWrong to have:

A goal of LessWrong is to grow and sustain a community of aligned* members who are well-trained and well-equipped with the right tools and community infrastructure to make progress on the biggest problems facing humanity, with a special focus on the intellectual problems.

*sharing our broad values of improving the world, ensuring the long-term future is good, etc.

Things not well-expressed in this goal:

  • LessWrong’s core focus on rationality and believing true things.

  • What LessWrong aims to provide users.

(These are better expressed in the About/Welcome page.)

I want to point out that while the above might be a goal of the LessWrong team, that doesn’t mean it has to be a goal of our users. I wholeheartedly welcome users who come to LessWrong for their own purposes, such as improving their personal rationality, learning interesting things, getting feedback on their ideas, being entertained by stimulating ideas, or participating socially in a community they like.

My RTC-P Framework

The reason I like the goal expressed above is that it provides unity to a wide range of activities we devote resources to. I like to group those activities into four overarching categories.

Recruitment: attracting new aligned and capable members to our community (and creating a funnel into EA orgs and projects).

Training: providing means for both new and existing members to improve their skills, knowledge, and effectiveness.

Community/Culture: we aim to improve community health and flourishing via means such as encouraging good epistemic norms and creating affordances to interact, e.g., meetups and conferences.

[Intellectual] Progress: we work to provide the LW platform and other tools that help community members contribute to progress on the challenging intellectual problems we face.

Recruitment

LessWrong has a non-trivial presence on the Internet. In the last 12 months, LessWrong has seen on average over 100k visitors each month [1]. Admittedly, many of these arrive due to low-relevance Google searches; however, over 25k each month navigate directly to the lesswrong.com domain. Depending on the month, several hundred to several thousand visitors arrive from each of SlateStarCodex and Hacker News. There are several hundred to a thousand unique pageviews of the opening posts of Rationality: A-Z each month, and around 500 more views of the first chapter of HPMOR.

This means that a relatively large number of people are probably being exposed to the ideas of the LessWrong, Rationality, and Effective Altruism communities for the first time when they encounter LessWrong. LessWrong has the opportunity to spread its ideas, but more importantly, there is scope here for us to onboard new capable and aligned people into our community.

The team has recently been building things to help new visitors have an experience conducive to becoming a member of our community. The homepage has been redesigned to present recommendations of our core readings and other top posts to new users. We’ve also written a new welcome post and FAQ covering how the site works, what it’s about, and how to get up to speed.

Something else we might do is start posting content outside of LessWrong to make people aware of what is on the site. We could create a “newsletter” collection of content (a mix of the best recent and classic posts) and share this via a Facebook page, relevant places on Reddit, Twitter, etc. This might also help us draw back some past users who dropped off during the great decline of 2015-2016.

Of course, recruitment doesn’t consist solely of your first moments of exposure online. There is a “funnel” as you progress through getting up to speed on the community’s knowledge and culture, your first experiences engaging with people on the site (your first comments and posts), attending in-person meetups (these were significant for me), and so on. These are all steps by which someone becomes part of our band trying to do good things.

Indeed, if we want to be a community of people trying to do good, important things, then it’s important we have an apparatus for having new people join. Recruitment. (I believe that if you are not growing, at least somewhat, then you are shrinking.) It’s not clear that we need to grow a lot; even 2x-5x might be sufficient. Certainly we should not grow at the expense of our culture and values.

Fortunately, LessWrong is a nonprofit, which helps with the incentives.

Training

LessWrong was not built to be solely a site for entertainment, recreation, or passive reading. The goal was always to improve: to think better, know more, and accomplish more. Tsuyoku Naritai is an emblematic post. The concept of a rationality dojo caught on. The goal is to do better individually and collectively.

LessWrong’s archive of rationality posts constitutes considerable training material. LessWrong has over 23k posts with non-negative karma scores. Noteworthy authors include Eliezer_Yudkowsky (1021 posts), Scott Alexander (230), Lukeprog (416), Kaj_Sotala (221), AnnaSalamon (68), So8res (51), Academian (37), and many others.

Effort has been invested to create easily accessible sequences of posts with convenience features such as Previous/Next buttons and Continue Reading suggestions on the homepage. The recently launched recommendation system suggests to users which posts they might be interested in and benefit from, for example by surfacing classics they might have missed.
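
To make that idea concrete, here is a minimal sketch of how a recommendation score of this kind could work. This is not the actual LessWrong implementation; the Post fields, the weights, and the half-life parameter are all illustrative assumptions.

```typescript
// A hypothetical sketch, NOT the actual LessWrong recommendation code.
// Field names, weights, and the half-life are illustrative assumptions.

interface Post {
  title: string;
  karma: number;       // net upvotes the post has received
  postedAt: Date;      // publication date
  readByUser: boolean; // whether the current user has already read it
}

// Score a post by karma, blending in an age-based decay so that both
// recent highlights and all-time classics can surface.
function recommendationScore(post: Post, now: Date, halfLifeDays = 365): number {
  const ageDays = (now.getTime() - post.postedAt.getTime()) / (1000 * 60 * 60 * 24);
  const decay = Math.pow(0.5, ageDays / halfLifeDays);
  return 0.5 * post.karma + 0.5 * post.karma * decay;
}

// Recommend the top-n posts the user has not yet read.
function recommend(posts: Post[], n: number, now = new Date()): Post[] {
  return posts
    .filter((p) => !p.readByUser && p.karma > 0) // skip read or downvoted posts
    .sort((a, b) => recommendationScore(b, now) - recommendationScore(a, now))
    .slice(0, n);
}

// Example: an old classic can outrank a newer, lower-karma post.
const sample: Post[] = [
  { title: "Tsuyoku Naritai", karma: 150, postedAt: new Date("2007-03-27"), readByUser: false },
  { title: "A recent post", karma: 60, postedAt: new Date("2019-05-01"), readByUser: false },
];
console.log(recommend(sample, 1).map((p) => p.title)); // ["Tsuyoku Naritai"]
```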

You can see the list of the ten most upvoted LessWrong posts for each year 2010-2018. We hope to have users not just reading recent content, but also our most valuable content of all time.

Another idea, costly to implement but one the team is fond of, is building LessWrong into a “textbook” with exercises that increase comprehension and retention.

And, although it doesn’t happen on the site, I’d count all the rationality practice that LessWrong members get together and perform at their in-person meetups as part of the training caused by LessWrong. There are 101 posts on LessWrong with dojo in the title, marking them as meetups where people intended to get together and practice. Sampling from those posts, these are meetups where people worked on calibration, urge propagation, non-violent communication, growth mindset, statistics, Bayesian Reasoning, Intuitive Bayes, Ideological Turing Tests, difficult conversations, Hamming prompts, stress, and memory.

Community & Culture

As an online forum, LessWrong naturally forms a community whose members share a culture and exchange ideas. Beyond the online, LessWrong has caused many in-person, offline communities to exist. In the past year, there were LessWrong meetups in thirty-one countries. There have been ~3,576 meetups posted on LessWrong (238 of these were SSC or LW/SSC combined meetups). Notably, the Bay Area Rationalist community exists in large part due to LessWrong, even if it is now somewhat separate. Other communities which have historically gained notice are the New York, Seattle, and Melbourne groups. Large area “mega-meetups” have been held on the US East Coast and in Canada, Australia, and Europe. There is a thriving rationalist/LessWrong community in Moscow.

Even with in-person communities already existing, I still see plenty of room for LessWrong to continue to bolster both online and offline community. For the offline world, we could provide support materials as in the past, provide funding (as CEA does for EA local groups), further develop our meetup coordination infrastructure, or host LessWrong conferences.

Culturally, LessWrong is defined by its epistemic norms, focus on truth, and openness to unconventional ideas. The community shares a distinctive body of knowledge and set of tools for thinking clearly. The core culture was established by Eliezer’s Sequences, shaping the approach to belief, reason, explanation, truth and evidence, the use of language, and the practice of changing one’s mind. As far as I know, the commitment to clear thinking and good communication present on LessWrong is unparalleled in any other public place on the Internet.

A goal of LessWrong is keeping this culture strong and being wary of any changes which could dilute this most valuable aspect of our community, e.g., promoting growth without ensuring new members are properly enculturated.

At present, standards are in part being kept high by active and careful moderation. Some may note that the discussion on LessWrong is presently more constructive and civil in tone than at times in the past. This is evident when looking at the style of the commenter GPT2 compared to those it was conversing with. GPT2, trained on the entire historical comment corpus, has a noticeably more condescending and contrarian tone than the comments typical of modern LessWrong.

Intellectual Progress

This category is arguably too broad, but that perhaps captures the fact that LessWrong 2.0 is open to quite a wide range of projects in the pursuit of further intellectual progress (or even just progress, intellectual or otherwise).

The LessWrong forum with posts, comments, and votes is already a technology for intellectual progress which allows thinkers to share ideas, get feedback, and build upon each other’s work. The team spends a lot of time thinking about what we could build to help people be more intellectually generative. The team has ongoing debates about what the sections of the site should be (“bringing back something like Main vs Discussion?” “Ah, but the problems!”), whether and how to promote the sharing of unpolished ideas (shortform feeds? these have been gaining in popularity without explicit support), and whether we can set up our own peer review process. Hearteningly, the goal is always to generate more “good content”, not merely to drive up content production and activity. Growth, if not exactly feared, is viewed with suspicion: perhaps it will dilute quality. Already some on the team fear that things trend too much towards insight porn rather than substantive contributions to the intellectual commons.

It’s worth noting that the LessWrong 2.0 team invests effort to promote intellectual progress outside of the lesswrong.com domain. The Effective Altruism Forum runs on the LessWrong codebase (and receives some support from the team). And last year the LessWrong team launched the AI Alignment Forum: a forum specifically for dedicated AI safety researchers. (It is no secret that the LessWrong 2.0 team members especially wish to see progress made on the intellectual problems of AI safety.)

One of the ideas for increasing intellectual progress which the team has been especially occupied with recently is that of an Open Questions platform. Among its many functions, such a platform would be a place where the community coordinates on which problems are most important and creates surface area so that more researchers can contribute.
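
As an illustration only (the team has not published a design), here is one way the core objects of such a platform might be modeled. The Question type, its fields, and the notion of sub-questions are my assumptions, not a spec:

```typescript
// A hypothetical data model for an Open Questions platform; a sketch of
// one way the coordination could work, not the team's actual design.

interface Question {
  id: string;
  title: string;
  karma: number;         // community signal of how important the problem is
  parentId?: string;     // sub-questions decompose a larger question
  relatedWork: string[]; // links to posts or answers contributing progress
}

// "Surface area" here means breaking a big question into sub-questions that
// individual researchers can pick up independently. This returns the open
// (so far unanswered) sub-questions of a question, most important first.
function openSubQuestions(all: Question[], parentId: string): Question[] {
  return all
    .filter((q) => q.parentId === parentId && q.relatedWork.length === 0)
    .sort((a, b) => b.karma - a.karma);
}
```

The design choice worth noticing is the parentId field: decomposing a big question into independently answerable sub-questions is what would create the surface area for more contributors.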

Other ideas for things the LessWrong 2.0 team could build to drive intellectual progress are: an optimized collaborative tool (like Google Docs, but better); a marketplace for intellectual labor (think Craigslist/TaskRabbit); a prediction market platform; and a researcher training program. I have written more about these ideas in LW2.0: Technology Platform for Intellectual Progress.