LW 2.0 Strategic Overview

Update: We’re in open beta! You can now sign up or log in with your LW 1.0 account (if the latter, note that we did not copy over your passwords, so hit “forgot password” to receive a password-reset email).

Hey Everyone!

This is the post for discussing the vision that I and the rest of the LessWrong 2.0 team have for the new version of LessWrong, and for bringing all of you up to speed with the plans for the site. This post has been overdue for a while, but I was busy coding on LessWrong 2.0, and I am myself not that great a writer, which means writing things like this takes me quite a long time, so this ended up being delayed a few times. I apologize for that.

With Vaniver’s support, I’ve been the primary person working on LessWrong 2.0 for the last 4 months, spending most of my time coding while also talking to various authors in the community, doing dozens of user-interviews and generally trying to figure out how to make LessWrong 2.0 a success. Along the way I’ve had support from many people, including Vaniver himself, who is providing part-time support from MIRI; Eric Rogstad, who helped me get off the ground with the architecture and infrastructure for the website; Harmanas Chopra, who helped build our karma system and did a lot of user-interviews with me; Raemon, who is doing part-time web-development work for the project; and Ben Pace, who helped me write this post and is basically co-running the project with me (and will continue to do so for the foreseeable future).

We are running on charitable donations, with $80k in funding from CEA in the form of an EA grant and $10k in donations from Eric Rogstad, which will go to salaries and various maintenance costs. We are planning to continue running this whole project on donations for the foreseeable future, and legally this is a project of CFAR, which helps us a bunch with accounting and allows people to get tax benefits from giving us money.

Now that the logistics are out of the way, let’s get to the meat of this post. What is our plan for LessWrong 2.0? What were our key assumptions in designing the site? What does this mean for the current LessWrong site, and what should we as a community discuss more to make sure the new site is a success?

Here’s the rough structure of this post:

  • My perspective on why LessWrong 2.0 is a project worth pursuing

  • A summary of the existing discussion around LessWrong 2.0

  • The models that I’ve been using to make decisions for the design of the new site, and some of the resulting design decisions

  • A set of open questions to discuss in the comments, where I expect community input/discussion to be particularly fruitful

Why bother with LessWrong 2.0?

I feel that, independently of how many things were and are wrong with the site and its culture, over the course of its history it has been one of the few places in the world that I know of where a spark of real discussion has happened, and where some real intellectual progress was made on actually important problems. So let me begin with a summary of things that I think the old LessWrong got right, which are essential to preserve in any new version of the site:

On LessWrong…

  • I can contribute to intellectual progress, even without formal credentials

  • I can sometimes have discussions in which the participants focus on trying to convey their true reasons for believing something, as opposed to rhetorically using all the arguments that support their position, independent of whether those have any bearing on their belief

  • I can talk about my mental experiences in a broad way, such that my personal observations, scientific evidence and reproducible experiments are all taken into account and given proper weighting. There is no narrow methodology I need to conform to to have my claims taken seriously.

  • I can have conversations about almost all aspects of reality, independently of what literary genre they are associated with or what scientific discipline they fall into, as long as they seem relevant to the larger problems the community cares about

  • I am surrounded by people who are knowledgeable in a wide range of fields and disciplines, who take the virtue of scholarship seriously, and who are interested and curious about learning things outside of their current area of expertise

  • We have a set of non-political shared goals for which many of us are willing to make significant personal sacrifices

  • I can post long-form content that takes up as much space as it needs to, and can expect a reasonably high level of patience from my readers in trying to understand my beliefs and arguments

  • Content that I post on the site gets archived, is searchable, and often gets referenced in other people’s writing; if my content is good enough, it can even become common knowledge in the community at large

  • The average competence and intelligence on the site is high, which allows discussion to generally happen on a high level and allows people to make complicated arguments and get taken seriously

  • There is a body of writing that is generally assumed to have been read by most people participating in discussions, and which establishes philosophical, social and epistemic principles that serve as a foundation for future progress (currently that body of writing largely consists of the Sequences, but it also includes some of Scott’s writing, some of Luke’s writing, and some individual posts by other authors)

When making changes to LessWrong, I think it is very important to preserve all of the above features. I don’t think all of them are universally present on LessWrong, but all of them are there at least some of the time, and no other place that I know of comes even remotely close to having all of them as often as LessWrong does. Those features are what motivated me to make LessWrong 2.0 happen, and they set the frame for the models and perspectives I will outline in the rest of the post.

I also think Anna, in her post about the importance of a single conversational locus, says another, somewhat broader thing that is very important to me, so I’ve copied it in here:

1. The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.
2. Despite all priors and appearances, our little community (the “aspiring rationality” community; the “effective altruist” project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle. This sounds like hubris, but it is at this point at least partially a matter of track record.
3. To aid in solving this puzzle, we must probably find a way to think together, accumulatively. We need to think about technical problems in AI safety, but also about the full surrounding context—everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about “ways of thinking”—both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better.
4. One feature that is pretty helpful here, is if we somehow maintain a single “conversation”, rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another. By “a conversation”, I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; and point out apparent errors and then have that pointing-out be actually taken into account or else replied to.
5. One feature that really helps things be “a conversation” in this way, is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read. Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.
6. We have lately ceased to have a “single conversation” in this way. Good content is still being produced across these communities, but there is no single locus of conversation, such that if you’re in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such. There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence. Without such a locus, it is hard for conversation to build in the correct way. (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)

The Existing Discussion Around LessWrong 2.0

Now that I’ve given a bit of context on why I think LessWrong 2.0 is an important project, it seems sensible to look at what has been said so far, so we don’t have to repeat the same discussions over and over again. There has already been a lot of discussion about the decline of LessWrong, the need for a new platform, and the design of LessWrong 2.0, and I won’t be able to summarize it all here, but I can try my best to summarize the most important points and give a bit of my own perspective on them.

Here is a comment by Alexandros, on Anna’s post I quoted above:

Please consider a few gremlins that are weighing down LW currently:
1. Eliezer’s ghost—He set the culture of the place, his posts are central material, has punctuated its existence with his explosions (and refusal to apologise), and then, upped and left the community, without actually acknowledging that his experiment (well kept gardens etc) has failed. As far as I know he is still the “owner” of this website, retains ultimate veto on a bunch of stuff, etc. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner, is it them?), who the moderators are, who is working on it in general. I know tricycle are helping with development, but a part-time team is only marginally better than no-team, and at least no-team is an invitation for a team to step up.
...I consider Alexei’s hints that Arbital is “working on something” to be a really bad idea, though I recognise the good intention. Efforts like this need critical mass and clarity, and diffusing yet another wave of people wanting to do something about LW with vague promises of something nice in the future… is exactly what I would do if I wanted to maintain the status quo for a few more years.
Any serious attempt at revitalising lesswrong.com should focus on defining ownership and plan clearly. A post by EY himself recognising that his vision for lw 1.0 failed and passing the baton to a generally-accepted BDFL would be nice, but i’m not holding my breath. Further, I am fairly certain that LW as a community blog is bound to fail. Strong writers enjoy their independence. LW as an aggregator-first (with perhaps ability to host content if people wish to, like hn) is fine. HN may have degraded over time, but much less so than LW, and we should be able to improve on their pattern.
I think if you want to unify the community, what needs to be done is the creation of a hn-style aggregator, with a clear, accepted, willing, opinionated, involved BDFL, input from the prominent writers in the community (scott, robin, eliezer, nick bostrom, others), and for the current lesswrong.com to be archived in favour of that new aggregator. But even if it’s something else, it will not succeed without the three basic ingredients: clear ownership, dedicated leadership, and as broad support as possible to a simple, well-articulated vision. Lesswrong tried to be too many things with too little in the way of backing.

I think Alexandros hits a lot of good points here, and luckily these are actually some of the problems I am most confident we have solved. The biggest bottleneck – the thing that I think caused most other problems with LessWrong – is simply that there was nobody with the motivation, the mandate and the resources to fight against the inevitable decline into entropy. I feel that the correct response to the question of “why did LessWrong decline?” is to ask “why should it have succeeded?”.

In the absence of anyone with the mandate trying to fix all the problems that naturally arise, we should expect any online platform to decline. Most of the problems that will be covered in the rest of this post are things that could have been fixed many years ago, but simply weren’t, because nobody with the mandate put many resources into fixing them. I think the cause of this was a diffusion of responsibility, and a lot of vague promises of problems getting solved by vague projects in the future. I myself put off working on LessWrong for a few months because I had some vague sense that Arbital would solve the problems I was hoping to solve, even though Arbital never really promised to solve them. Then Arbital’s plan ended up not working out, and I had wasted months of precious time.

Since this comment was written, Vaniver has been near-unanimously declared benevolent dictator for life of LessWrong. He and I have gotten various stakeholders on board, received funding, have a vision, and have free time – and so we have the mandate, the resources and the motivation to not make the same mistakes. With our new codebase, link posts are now something I can build in an afternoon, rather than something that requires three weeks of getting permissions from various stakeholders, performing complicated open-source and confidentiality rituals, and hiring a new contractor who has to first understand the mysterious Reddit fork from 2008 that LessWrong is based on. This means at least the problem of diffusion of responsibility is solved.

Scott Alexander also made a recent comment on Reddit on why he thinks LessWrong declined, and why he is somewhat skeptical of attempts to revive the website:

1. Eliezer had a lot of weird and varying interests, but one of his talents was making them all come together so you felt like at the root they were all part of this same deep philosophy. This didn’t work for other people, and so we ended up with some people being amateur decision theory mathematicians, and other people being wannabe self-help gurus, and still other people coming up with their own theories of ethics or metaphysics or something. And when Eliezer did any of those things, somehow it would be interesting to everyone and we would realize the deep connections between decision theory and metaphysics and self-help. And when other people did it, it was just “why am I reading this random bulletin board full of stuff I’m not interested in?”
2. Another of Eliezer’s talents was carefully skirting the line between “so mainstream as to be boring” and “so wacky as to be an obvious crackpot”. Most people couldn’t skirt that line, and so ended up either boring, or obvious crackpots. This produced a lot of backlash, like “we need to be less boring!” or “we need fewer crackpots!”, and even though both of these were true, it pretty much meant that whatever you posted, someone would be complaining that you were bad.
3. All the fields Eliezer wrote in are crackpot-bait and do ring a bunch of crackpot alarms. I’m not just talking about AI—I’m talking about self-help, about the problems with the academic establishment, et cetera. I think Eliezer really did have interesting things to say about them—but 90% of people who try to wade into those fields will just end up being actual crackpots, in the boring sense. And 90% of the people who aren’t will be really bad at not seeming like crackpots. So there was enough kind of woo type stuff that it became sort of embarrassing to be seen there, especially given the thing where half or a quarter of the people there or whatever just want to discuss weird branches of math or whatever.
4. Communities have an unfortunate tendency to become parodies of themselves, and LW ended up with a lot of people (realistically, probably 14 years old) who tended to post things like “Let’s use Bayes to hack our utility functions to get superfuzzies in a group house!”. Sometimes the stuff they were posting about made sense on its own, but it was still kind of awkward and the sort of stuff people felt embarrassed being seen next to.
5. All of these problems were exacerbated by the community being an awkward combination of Google engineers with physics PhDs and three startups on one hand, and confused 140 IQ autistic 14 year olds who didn’t fit in at school and decided that this was Their Tribe Now on the other. The lowest common denominator that appeals to both those groups is pretty low.
6. There was a norm against politics, but it wasn’t a very well-spelled-out norm, and nobody enforced it very well. So we would get the occasional leftist who had just discovered social justice and wanted to explain to us how patriarchy was the real unfriendly AI, the occasional rightist who had just discovered HBD and wanted to go on a Galileo-style crusade against the deceptive establishment, and everyone else just wanting to discuss self-help or decision-theory or whatever without the entire community becoming a toxic outcast pariah hellhole. Also, this one proto-alt-right guy named Eugene Nier found ways to exploit the karma system to mess with anyone who didn’t like the alt-right (ie 98% of the community) and the moderation system wasn’t good enough to let anyone do anything about it.
7. There was an ill-defined difference between Discussion (low-effort random posts) and Main (high-effort important posts you wanted to show off). But because all these other problems made it confusing and controversial to post anything at all, nobody was confident enough to post in Main, and so everything ended up in a low-effort-random-post bin that wasn’t really designed to matter. And sometimes the only people who did post in Main were people who were too clueless about community norms to care, and then their posts became the ones that got highlighted to the entire community.
8. Because of all of these things, Less Wrong got a reputation within the rationalist community as a bad place to post, and all of the cool people got their own blogs, or went to Tumblr, or went to Facebook, or did a whole bunch of things that relied on illegible local knowledge. Meanwhile, LW itself was still a big glowing beacon for clueless newbies. So we ended up with an accidental norm that only clueless newbies posted on LW, which just reinforced the “stay off LW” vibe.
I worry that all the existing “resurrect LW” projects, including some really high-effort ones, have been attempts to break coincidental vicious cycles—ie deal with 8 and the second half of 7. I think they’re ignoring points 1 through 6, which is going to doom them.

At least judging from where my efforts went, I would agree that I have spent a pretty significant amount of resources on fixing the problems Scott describes in points 6 and 7, but I also spent about equal time thinking about how to fix 1-5. The broader perspective I have on those latter points is, I think, best illustrated in an analogy:

When I read Scott’s comments about how there was just a lot of embarrassing and weird writing on LessWrong, I remember my experiences as a Computer Science undergraduate. When the median undergrad makes claims about the direction of research in their field, or some other big claim about their field that isn’t explicitly taught in class, or if you ask an undergraduate physics student what they think about how to do physics research, or what ideas they have for improving society, they will often give you quite naive-sounding answers (I have heard everything from “I am going to build a webapp to permanently solve political corruption” to “here’s my idea of how we can transmit large amounts of energy wirelessly by using low-frequency tesla-coils”). I don’t think we should expect anything different on LessWrong. I actually think we should expect it to be worse here, since we are actively encouraging people to have opinions, as opposed to the more standard practice of academia, which seems to consist of treating undergraduates as slightly more intelligent dogs that need to be conditioned with the right mixture of calculus homework problems and mandatory class attendance, so that they might be given the right to have any opinion at all if they spend 6 more years getting their PhD.

So while I do think that Eliezer’s writing encouraged topics that were slightly more likely to attract crackpots, I think a large chunk of the weird writing is just a natural consequence of being an intellectual community that has a somewhat constant influx of new members.

And having undergraduates go through the phase where they have bad ideas, and then having it explained to them why their ideas are bad, is important. I actually think it’s key to learning any topic more complicated than high-school mathematics. It takes a long time until someone can productively contribute to the intellectual progress of an intellectual community (in academia it’s at least 4 years, though usually more like 8), and during all that period they will say very naive and silly-sounding things (though less and less so as time progresses). I think LessWrong can do significantly better than 4 years, but we should still expect that it will take new members time to acclimate and get used to how things work (based on user-interviews with a lot of top commenters, it usually took something like 3-6 months until someone felt comfortable commenting frequently, and about 6-8 months until someone felt comfortable posting frequently. This strikes me as a fairly reasonable expectation for the future).

And I do think that we have many graduate students and tenured professors of the rationality community who are not Eliezer, who do not sound like crackpots, who can speak reasonably about the same topics Eliezer talked about, and who I feel are acting with a very similar focus to what Eliezer tried to achieve: Luke Muehlhauser, Carl Shulman, Anna Salamon, Sarah Constantin, Ben Hoffman, Scott himself and many more, most of whose writing would fit very well on LessWrong (and often still ends up there).

But all of this doesn’t mean what Scott describes isn’t a problem. It’s still a bad experience for everyone to constantly have to read through bad first-year undergrad essays, but I think the solution can’t involve those essays not getting written at all. Instead, it has to involve some way of not forcing everyone to see those essays, while still allowing them to get promoted if someone shows up who does write something insightful from day one. I am currently planning to tackle this mostly with improvements to the karma system, as well as changes to the layout of the site, where users primarily post to their own profiles and can get content promoted to the frontpage by moderators and high-karma members. A feed consisting solely of content of the quality of the average Scott, Anna, Ben or Luke post would be an amazing read, and is exactly the kind of feed I am hoping to create with LessWrong, while still allowing users to engage with the rest of the content on the site (more on that later).

I would very, very roughly summarize what Scott says in the first 5 points as two major failures: first, a failure to separate the signal from the noise, and second, a failure to enforce moderation norms when people did turn out to be crackpots or just unable to productively engage with the material on the site. Both are natural consequences of the abandonment of promoting things to Main, the fact that discussion is by default ordered by recency rather than by some kind of scoring system, and the fact that the moderation tools were completely insufficient (but more on the details of that in the next section).

My models of LessWrong 2.0

I think there are three major bottlenecks that LessWrong is facing (after the zeroth bottleneck, which is just that no single group had the mandate, resources and motivation to fix any of the problems):

  1. We need to be able to build on each other’s intellectual contributions, archive important content and avoid primarily being news-driven

  2. We need to improve the signal-to-noise ratio for the average reader, and only broadcast the most important writing

  3. We need to actively moderate in a way that is both fun for the moderators, and helps people avoid future moderation policy violations


The first bottleneck for our community, and I think the biggest, is the ability to build common knowledge. On Facebook, I can read an excellent and insightful discussion, yet one week later I’ve forgotten it. Even if I remember it, I don’t link to the Facebook post (because linking to Facebook posts/comments is hard), and it doesn’t have a title, so I don’t casually refer to it in discussion with friends. On Facebook, ideas don’t get archived and built upon; they get discussed and forgotten. To put this another way: the reason we cannot build on the best ideas this community has had over the last five years is that we don’t know what they are. There are only fragments of memories of Facebook discussions, which maybe some other people remember. We have the Sequences, but there’s no way to build on them together as a community, and thus there is stagnation.

Contrast this with science. Modern science is plagued by many severe problems, but of humanity’s institutions it has perhaps the strongest record of being able to build successfully on its previous ideas. The physics community has a system where new ideas get put into journals, and then eventually, if they’re new, important, and true, they get turned into textbooks, which are then read by the upcoming generation of physicists, who then write new papers based on the findings in the textbooks. All good scientific fields have good textbooks, and your undergrad years are largely spent reading them. I think the rationality community has some textbooks, written by Eliezer (and we also compiled a collection of Scott’s best posts that I hope will become another textbook of the community), but there is no expectation that if you write a good enough post/paper your content will be included in the next generation of those textbooks, and the existing books we have rarely get updated. This makes the current state of the rationality community analogous to a hypothetical state of physics in which physics had no journals, no textbook publishers, and only one textbook that is about a decade old.

This seems to me to be what Anna is talking about—the purpose of the single locus of conversation is the ability to have common knowledge and build on it. The goal is to have every interaction with the new LessWrong feel like it is either helping you grow as a rationalist or having you contribute to the lasting intellectual progress of the community. If you write something good enough, it should enter the canon of the community. If you make a strong enough case against some existing piece of canon, you should be able to replace or alter that canon. I want writing for the new LessWrong to feel timeless.

To achieve this, we’ve built the following things:

  • We created a section for core canon on the site that is prominently featured on the frontpage and right now includes Rationality: A-Z, The Codex (a collection of Scott’s best writing, compiled by Scott and us), and HPMOR. Over time I expect these to change, and there is a good chance HPMOR will move to a different section of the site (I am considering adding an “art and fiction” section) and will be replaced by a new collection representing new core ideas in the community.

  • Sequences are now a core feature of the website. Any user can create sequences of their own and other users’ posts, and those sequences can themselves be voted and commented on. The goal is to help users compile the best writing on the site, and make it so that good timeless writing gets read by users for a long time, as opposed to disappearing into the void. Separating creative and curatorial effort allows the sort of professional specialization that you see in serious scientific fields.

  • Of those sequences, the most upvoted and most important ones will be chosen to be prominently featured on other sections of the site, allowing users easy access to the best content on the site so they can get up to speed with the current state of knowledge of the community.

  • For all posts and sequences, the site keeps track of how much of them you’ve read (including importing view-tracking from the old LessWrong, so you will get to see how much of the original Sequences you’ve actually read). And if you’ve read all of a sequence, you get a small badge that you can choose to display right next to your username, which helps people gauge how much of the content of the site you are familiar with.

  • The design of the core content of the site (e.g. the Sequences, the Codex, etc.) tries to communicate a certain permanence of contributions. The aesthetic feels intentionally book-like, which I hope gives people a sense that their contributions will be archived, accessible and built upon.

  • One important issue with this is that there also needs to be a space for sketches on LessWrong. To quote Paul Graham: “What made oil paint so exciting, when it first became popular in the fifteenth century, was that you could actually make the finished work from the prototype. You could make a preliminary drawing if you wanted to, but you weren’t held to it; you could work out all the details, and even make major changes, as you finished the painting.”

  • We do not want to discourage sketch-like contributions, and want to build functionality that helps people build a finished work from a prototype (this is one of the core competencies of Google Docs, for example).

And there are some more features the team is hoping to build in this direction, such as:

  • Easier archiv­ing of dis­cus­sions by al­low­ing dis­cus­sions to be turned into top-level posts (similar to what Ben Pace did with a re­cent Face­book dis­cus­sion be­tween Eliezer, Wei Dai, Stu­art Arm­strong, and some oth­ers, which he turned into a post on LessWrong 2.0)

  • The abil­ity to con­tinue read­ing the con­tent you’ve started read­ing with a sin­gle click from the front­page. Here’s an ex­am­ple logged-in front­page:


The second bottleneck is improving the signal-to-noise ratio. It needs to be possible for someone to subscribe to only the best posts on LessWrong, and only the most important content should be turned into common knowledge.

I think this is a lot of what Scott was pointing at in his summary of the decline of LessWrong. We need a way for people to learn from their mistakes, without flooding the inboxes of everyone else, and while giving people active feedback on how to improve their writing.

The site structure:

To solve this bottleneck, here is the rough content structure that I am currently planning to implement on LessWrong:

The writing experience:

If you write a post, it first shows up nowhere but your personal user page, which you can basically think of as a Medium-style blog. If other users have subscribed to you, your post will then show up on their frontpages (or only show up after it hits a certain karma threshold, if the users who subscribed to you set a minimum karma threshold). If you have enough karma you can decide to promote your content to the main frontpage feed (where everyone will see it by default), or a moderator can decide to promote your content (if you allowed promoting on that specific post). The frontpage itself is sorted by a scoring system based on the HN algorithm, which uses a combination of total karma and how much time has passed since the creation of the post.
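For reference, an HN-style score looks roughly like this (the gravity exponent and the +2 offset are the commonly cited HN defaults, not necessarily the exact values LessWrong 2.0 will use):

```python
def frontpage_score(karma: float, age_hours: float, gravity: float = 1.8) -> float:
    """Hacker-News-style ranking: karma divided by a power of the post's age.

    Higher gravity makes older posts fall off the frontpage faster; the
    specific constants here are assumptions, not the site's actual values.
    """
    return karma / (age_hours + 2) ** gravity
```

With this shape, a day-old post needs several times the karma of a fresh one to hold the same frontpage position.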

If you write a good comment on a post, a moderator or a high-karma user can promote that comment to the frontpage as well, where we will also feature the best comments on recent discussions.


Meta

Meta will just be a section of the site to discuss changes to moderation policies, issues and bugs with the site, and site features, as well as general site-policy issues. It is basically the thing that all StackExchanges have. Karma earned here will not add to your total karma and will not give you more influence over the site.

Featured posts

In addition to the main feed, there is a promoted-post section that you can subscribe to via email and RSS, with on average three posts a week, which for now are just going to be chosen by moderators and editors on the site as the posts that seem most important to turn into common knowledge for the community.

Meetups (implementation unclear)

There will also be a separate section of the site for meetups and event announcements, which will feature a map of meetups and generally serve as a place to coordinate the in-person communities. The specific implementation of this is not yet fully figured out.

Shortform (implementation unclear)

Many authors (including Eliezer) have requested a section of the site for more short-form thoughts, closer in length to an average FB post. It seems reasonable to have a section of the site for that, though I am not yet fully sure how it should be implemented.


The goal of this structure is to allow users to post to LessWrong without their content being directly exposed to the whole community. Their content can first be shown to the people who follow them, or to the people who actively seek out content from the broader community by scrolling through all new posts. Then, if a high-karma user among them finds their content worth posting to the frontpage, it will get promoted. The key to this is a larger userbase with the ability to promote content (i.e. many more people than have the ability to promote content to Main on the current LessWrong), and the continued filtering of the frontpage based on the karma level of the posts.

The goal of all of this is to allow users to see good content at various levels of engagement with the site, while giving some personalization options so that people can follow the people they are particularly interested in, and while also ensuring that this does not sabotage the attempt at building common knowledge, by having the best posts from the whole ecosystem featured and promoted on the frontpage.

The karma system:

Another thing I’ve been working on to fix the signal-to-noise ratio is improving the karma system. It’s important that the people having the most significant insights are able to shape a field more. If you’re someone who regularly produces real insights, you’re better able to notice and bring up other good ideas. To achieve this we’ve built a new karma system, where your upvotes and downvotes carry more weight if you have a lot of karma already. So far the current weighting is a very simple heuristic, whereby your upvotes and downvotes count for log base 5 of your total karma. Ben and I will post another top-level post to discuss just the karma system at some point in the next few weeks, but feel free to ask any questions now, and we will just include those in that post.
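As a sketch, the heuristic above amounts to something like the following (the flooring, and treating low-karma users as casting 1-point votes, are my assumptions about the details, not confirmed site behavior):

```python
import math

def vote_weight(total_karma: int) -> int:
    """Weight of an up/downvote: log base 5 of the voter's total karma.

    Floored at 1 so that new users' votes still count; the exact floor
    and rounding behavior is an assumption for illustration.
    """
    if total_karma <= 1:
        return 1
    return max(1, math.floor(math.log(total_karma, 5)))
```

Under this scheme a user needs roughly five times more karma for each extra point of vote weight.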

(I am currently experimenting with a karma system based on the concept of eigendemocracy by Scott Aaronson, which you can read about here, but which basically boils down to applying Google’s PageRank algorithm to karma allocation. How trusted you are as a user (your karma) is based on how much trusted users upvote you, and the circularity of this definition is solved using linear algebra.)
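To make the idea concrete, here is a toy sketch (entirely my own illustration, not the site’s implementation): treat “user A upvoted user B” as a link and run PageRank-style power iteration over the resulting graph, so that upvotes from trusted users confer more trust.

```python
import numpy as np

def eigen_karma(upvotes, damping=0.85, iters=100):
    """Toy eigendemocracy: PageRank over a who-upvoted-whom matrix.

    upvotes[i][j] = number of times user i upvoted user j.
    Trust flows along upvotes; the circular definition ("you are trusted
    if trusted users upvote you") is resolved by iterating to the
    stationary distribution of a damped random walk.
    """
    M = np.asarray(upvotes, dtype=float)
    n = len(M)
    # Normalize each voter's outgoing votes; users who voted for nobody
    # spread their trust uniformly (the standard dangling-node fix).
    row_sums = M.sum(axis=1, keepdims=True)
    M = np.divide(M, row_sums, out=np.full_like(M, 1.0 / n), where=row_sums > 0)
    trust = np.full(n, 1.0 / n)
    for _ in range(iters):
        trust = (1 - damping) / n + damping * (trust @ M)
    return trust / trust.sum()

# Three users: 0 and 1 each upvote user 2; user 2 upvotes user 0.
scores = eigen_karma([[0, 0, 1], [0, 0, 1], [1, 0, 0]])
```

In this toy example user 2, upvoted by both other users, ends up with the highest trust score.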

I am also interested in having some form of two-tiered voting, similar to how Facebook has a primary vote interaction (the Like) and a secondary interaction that you can access via a tap or a hover (angry, sad, heart, etc.). But the implementation of that is also currently undetermined.


The third and last bottleneck is a moderation system that actually works and is fun for moderators to use, while also giving people whose content was moderated a sense of why, and of how they can improve.

The most common basic complaint on LessWrong currently pertains to trolls and sockpuppet accounts, which the Reddit fork’s mod tools are vastly inadequate for dealing with (Scott’s sixth point refers to this). Raymond Arnold and I are currently building more nuanced mod tools, including the ability for moderators to set the past/future votes of a user to zero, to see who upvoted a post, and to know the IP address that an account comes from (this will be ready by the open beta).

Besides that, we are currently cultivating a moderation group we are calling the “Sunshine Regiment.” Members of the Sunshine Regiment will have the ability to take various smaller moderation actions around the site (such as temporarily suspending comment threads, making general moderating comments in a distinct font, and promoting content), and so will be able to shape the culture and content of the website to a larger degree.

The goal is moderation that goes far beyond dealing with trolls, and that actively makes epistemic norms a ubiquitous part of the website. Right now Ben Pace is thinking about moderation norms that encourage archiving and summarizing good discussion, as well as other patterns of conversation that will help the community make intellectual progress. In the coming weeks he’ll be posting to the open beta to discuss what norms the site and moderators should have. We’re both in agreement that moderation can and should be improved, and that moderators need better tools, and we would appreciate good ideas about what else to give them.

How you can help, and issues to discuss:

The open beta of the site is starting in a week, and so you can see all of this for yourself. For the duration of the open beta, we’ll continue the discussion on the beta site. At the conclusion of the open beta, we plan to hold a vote, open to everyone who had a thousand karma or more on 9/13, to determine whether we should move forward with the new site design (which would move from its temporary beta location to the lesswrong.com URL), or leave LessWrong as it is now. (As the latter would represent the failure of the plan to revive LW, it would likely lead to the site being archived rather than staying open in an unmaintained state.) For now, this is an opportunity for the current LessWrong community to chime in here and object to anything in this plan.

During the open beta (and only during that time) the site will also have an Intercom button in the bottom right corner that allows you to chat directly with us. If you run into any problems, or notice any bugs, feel free to ping us directly on there and Ben and I will try to help you out as soon as possible.

Here are some issues where discussion would be particularly fruitful:

  • What are your thoughts about the karma system? Does an eigendemocracy-based system seem reasonable to you? How would you implement the details? Ben and I will post our current thoughts on this in a separate post in the next two weeks, but we would be interested in people’s unprimed ideas.

  • What are your experiences with the site so far? Is anything glaringly missing, or are there any bugs you think I should definitely fix?

  • Do you have any complaints or thoughts about how work on LessWrong 2.0 has been proceeding so far? Are there any worries or issues you have with the people working on it?

  • What would make you personally use the new LessWrong? Is there any specific feature that would make you want to use it? For reference, here is our current feature roadmap for LW 2.0.

  • And most importantly, do you think that the LessWrong 2.0 project is doomed to failure for some reason? Is there anything important I missed, or something that I misunderstood about the existing critiques?

The closed beta can be found at www.lesserwrong.com.

Ben, Vaniver, and I will be in the comments!