ialdabaoth is banned

ialdabaoth is banned from LessWrong, because I think he is manipulative in ways that will predictably make the epistemic environment worse. This ban is unusual in several respects: it relies somewhat heavily on evidence from in-person interactions and material posted to other sites as well as posts on LessWrong, and the user in question has been an active user of the site for a long time. While this decision was made in the context of other accusations, I think it can be justified solely on epistemic concerns. I also explain some of the reasons for the delay below.

However, in the interests of fairness, and because we believe ideas from questionable sources can be valid, we’ll make edits he suggests to his post Affordance Widths so that it can fully participate in the 2018 Review. My hope is that announcing this now will cause the discussion on that post to be focused solely on the post rather than on social coordination about whether he should or should not be banned. (Commentary on this decision should happen here.)

Some background context:

Back in September of 2018, I posted this comment about a discussion involving allegations of serious misconduct, and said LW was not the place to conduct investigations, but that it would be appropriate to link to findings once the investigation concluded.

As far as I’m aware, he has made no claims of either guilt or innocence, and went into exile, including ceasing to post or comment on LessWrong. To the best of my knowledge, none of the panels that conducted investigations posted findings, primarily for reasons of legal liability, and so there was never an obvious time to publicly and transparently settle his status (until now).

One of the primary benefits of courts is that they allow for a cognitive specialization of labor, where a small number of people can carefully collect information, come to a considered judgment, and then broadcast that judgment. Though a number of groups have run their own investigations and made calls about whether ialdabaoth is welcome in their spaces, generally choosing no, there has been no transparent and accountable process that has made a public pronouncement on the allegations brought against him.

About six months ago, ialdabaoth messaged Raemon, asking if he was banned. Raemon replied that the team was considering banning him but had multiple conflicting lines of thought that hadn’t been worked through yet, and that if he commented, Raemon or someone else would respond with another comment making that state of affairs transparent.

I think that ialdabaoth poses a substantial risk to our epistemic environment due to manipulative epistemic tactics, based on our knowledge and experience of him. This is sufficient reason for the ban, and holds without investigating or making any sort of ruling on other allegations. This ban is not intended to provide a ruling either way on those allegations, as we have not conducted any investigation of our own into them, nor do we plan to, nor do we think we have the necessary resources for such work.

It does seem important to point out that some of the standards used to assess that risk stem from processing what happened in the wake of the allegations. A brief characterization is that I think the community started to take more seriously not just the question of “will I be better off adopting this idea?” but also the question “will this idea mislead someone else, or does it seem designed to?”. If I had my current standards in 2017, I think they would have sufficed to ban ialdabaoth then, or at least to do more to identify the need for arguing against the misleading parts of his ideas.

This processing was gradual, and ialdabaoth going into exile meant there wasn’t any time pressure. I think it’s somewhat awkward that we became comfortable with the status quo and didn’t notice when a month and then a year had passed without us making this state transparent, or having the discussion necessary to prepare this post. However, with one of his posts nominated for the 2018 Review, this post became urgent as well as important.

Some frameworks and reasoning:

In moderating LessWrong, I don’t want to attempt to police the whole world or even the whole internet. If someone comes to LessWrong with an accusation that a LW user mistreated them someplace else, the response is generally “handle it there instead of here.” This is part of a desire to keep LW free of politics and factionalism, and instead focused on the development of shared tools and culture, as well as to cause issues to be settled in contexts that have the necessary information. That said, it also seems to me like sensible Bayesianism to keep evidence from the rest of the world in mind when judging behavior on the site, and to pay more attention to users who we expect to be problematic in one way or another.

But what does it mean that ideas from questionable sources can be valid? Argument screens off authority, but authority (positive or negative) has some effect. Consider these cases:

Suppose you are running a physics journal, and a convicted murderer sends you a paper draft; you might feel some disgust at handling the paper, but it seems to me that the correct thing to do is handle the paper like any other, and accept it if the science checks out and reject it if it doesn’t. If your primary goal is getting the best physics, blinded review seems useful; you don’t care very much whether or not the author is violent, and you care a lot about whether the thing they said was true. If, instead, the person was convicted of manufacturing data or the other sorts of scientific misconduct that are difficult to detect with peer review, it seems justified to simply reject the submission. You also might not want them to give a talk at your conference.

Suppose instead you are running a trading fund, and someone previously convicted of fraud sends you an idea for a new financial instrument. Here, it seems like you should be much more suspicious, not just of the idea but also of your ability to successfully notice the trap if there is one. It seems relevant now to check both whether the idea is true and whether or not it is manipulative. Rather than just performing a process that catches simple mistakes or omissions, one needs to perform a process that’s robust to active attempts to mislead the judging process.

Suppose instead you’re running an entertainment business like a sports team, and someone affiliated with the team does something unpopular. Since the primary goal you’re maximizing is not anything epistemic, but instead how popular you are, it seems efficient to act primarily based on how the affiliation affects your reputation.

I think the middle case is closest to the situation we’re in now, for reasons like those discussed in comments by jimrandomh and by Zack_M_Davis. Much of ialdabaoth’s output is claims about social dynamics and reasoning systems that seem, at least in part, designed to manipulate the reader, either by making them more vulnerable to predation or more likely to ignore him / otherwise give him room to operate.

While we can’t totally ignore reputation costs, I currently think LessWrong can and should treat reputation costs as much less important than epistemic costs. I don’t think we should ban people simply for having bad reputations or committing non-epistemic crimes, but I think we should act vigorously to maintain a healthy epistemic environment, which means both being open and having an active immune system. This, of course, is not meant to be a commentary on how in-person gatherings should manage who is and isn’t welcome, as the dynamics of physical meetups and social communities are quite different from those of websites. When the two intersect, we do take seriously our duty of care towards our users and people in general.

Plan for including content of banned users

In general, LessWrong does not remove the posts or comments of banned users, with the exception of spam. It seems worth sharing our rough plan for what to do if a post by a banned user passes through an annual review, though it seems to me that the standard mechanisms we have in place for the review will handle this possibility gracefully.

As with all posts, the post will only be included with the consent of the author. If a post is controversial for any reason, we may decide that inclusion requires some sort of editor’s commentary or the inclusion of user comments or reviews, which would be shared with the author before they decide whether or not to consent to inclusion.