# Open & Welcome Thread—September 2020

If it’s worth saying, but not worth its own post, here’s a place to put it. (You can also make a shortform post.)

And, if you are new to LessWrong, here’s the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.

The Open Thread tag is here.

• Today we have banned two users, curi and Periergo, from LessWrong for two years each. The reasoning for the two bans is a bit entangled, but the cases are overall almost completely separate, so let me go through them individually:

Periergo is an account that is pretty easily traceable to a person that curi has been in conflict with for a long time, and who seems to have signed up with the primary purpose of attacking curi. I don’t think there is anything fundamentally wrong about signing up to LessWrong to warn other users of the potentially bad behavior of an existing user on some other part of the internet, but I do think it should be done transparently.

It also appears to be the case that he has done a bunch of things that go beyond merely warning others (like mailbombing curi, i.e. signing him up for tons of email spam he never asked for, and lots of sockpuppeting on forums that curi frequents), and that seem better classified as harassment, and overall it seemed to me that this isn’t the right place for Periergo.

Curi has been a user on LessWrong for a long time, and has made many posts and comments. He also has the dubious honor of being by far the most downvoted account in all of LessWrong history, at −675 karma.

The biggest problem with his participation is that he has a history of dragging people into discussions that drag on for an incredibly long time without seeming particularly productive, while also having a history of pretty aggressively attacking people who stop responding to him. On his blog, he and others maintain a long list of people who engaged with him and others in the Critical Rationalist community, but then stopped, in a way that is very hard to read as anything but a public attack. Its first sentence is “This is a list of ppl who had discussion contact with FI and then quit/evaded/lied/etc.”, and in particular the framing of “quit/evaded/lied” sure sets the framing for the rest of the post as a kind of “wall of shame”.

Those three things in combination, a propensity for long unproductive discussions, a history of threats against people who engage with him, and being the historically most downvoted account in LessWrong history, make me overall think it’s better for curi to find other places as potential discussion venues.

I do really want to make clear that this is not a personal judgement of curi. While I do find the “List of Fallible Ideas Evaders” post pretty tasteless, and don’t like discussing things with him particularly much, he seems well-intentioned, and it’s quite plausible that he could be an amazing contributor to other online forums and communities. Many of the things he is building over on his blog seem pretty cool to me, and I don’t want others to update on this as being much evidence about whether it makes sense to have curi in their communities.

I do also think his most recent series of posts and comments is overall much less bad than the posts and comments he posted a few years ago (where most of his negative karma comes from), but they still don’t strike me as great contributions to the LessWrong canon, are all low-karma, and I assign too high a probability that old patterns will repeat themselves (and also that his presence will generally make people averse to being around, because of those past patterns). He has also explicitly written a post in which he updates his LW commenting policy towards something less demanding, and I do think that was the right move, but I don’t think it’s enough to tip the scales on this issue.

More broadly, LessWrong has seen a pretty significant growth of new users in the past few months, mostly driven by interest in Coronavirus discussion and the discussion we hosted on GPT-3. I continue to think that “Well-Kept Gardens Die By Pacifism”, and that it is essential for us to be very careful in handling that growth, and to generally err on the side of curating our userbase pretty heavily and maintaining high standards. This means making difficult moderation decisions long before it is proven “beyond a reasonable doubt” that someone is not a net-positive contributor to the site.

In this case, I think it is definitely not proven beyond a reasonable doubt that curi is overall net-negative for the site, and banning him might well be a mistake, but I think the probabilities weigh heavily enough in favor of the net-negative, and the worst-case outcomes are bad enough, that on net I think this is the right choice.

• Today we have banned two users, curi and Periergo from LessWrong for two years each.

I wanted to reply to this because I don’t think it’s right to judge curi the way you have. Periergo I don’t have an issue w/. (it’s a sockpuppet acct anyway)

I think your decision should not go unquestioned/uncriticized, which is why I’m posting. I also think you should reconsider curi’s ban under a sort of appeals process.

Also, the LW moderation process is evidently transparent enough for me to make this criticism, and that is notable and good. I am grateful for that.

On his blog, he and others maintain a long list of people who engaged with him and others in the Critical Rationalist community, but then stopped, in a way that is very hard to read as anything but a public attack.

You are judging curi and FI (Fallible Ideas) via your standards (LW standards), not FI’s standards. I think this is problematic.

I’d like to note I am on that list (about 12 entries down). I am also a public figure in Australia, having founded a federal political party based on epistemic principles with nearly 9k members. I am okay with being on that list. Arguably, if there were something truly wrong with the list, I should have an issue with it. I knew about being on that list earlier this year, before I returned to FI. Being on the list was not a factor in my decision.

There is nothing immoral or malicious about curi.us/2215. I can understand why you would find it distasteful, but that’s not a decisive reason to ban someone or condemn their actions.

A few hours ago, curi and I discussed elements of the ban and curi.us/2215 on his stream. I recommend watching a few minutes starting at 5:50 and at 19:00; for transparency you might also be interested in 23:40 → 24:00. (You can watch at 2x speed; it should be fine.)

In particular, I discuss my presence on curi.us/2215 at 5:50.

You say:

a long list of people who engaged with him and others in the Critical Rationalist community

There are 33 by my count (including me). The list spans a decade, and is there for a particular purpose, and it is not to publicly shame people into returning, or to be mean for the sake of it. I’d like to point out some quotes from the first paragraph of curi.us/2215:

This is a list of ppl who had discussion contact with FI and then quit/evaded/lied/etc. It would be good to find patterns about what goes wrong. People who left are welcome to come back and try again.

Notably, you don’t end up on the list if you are active. Also, although it’s not explicitly mentioned in the top paragraph, a crucial thing is that those on the list have left and avoided discussion about it. Discussion is much more important in FI than in most philosophy forums—it’s how we learn from each other, make sure we understand, offer criticism and assist with error correction. You’re not under any obligation to discuss something, but if you have criticisms and refuse to share them, you’re preventing error correction; and if you leave to evade criticism then you’re not living by your values and philosophy.

The people listed on curi.us/2215 have participated in a public philosophy forum for which there are established norms that are not typical and are different from LW’s. FI views the act of truth-seeking differently. While our (LW/FI) schools of thought disagree on epistemology, both schools have norms that are related to their epistemic ideas. Ours look different.

It is unfair to punish someone for an act done outside of your jurisdiction under different established norms. If curi were putting LW people on his list, or publishing off-topic stuff at LW, sure, take moderation action. None of those things happened. In fact, the main reason you’ve provided for even knowing about that list is via the sockpuppet you banned.

Sockpuppet accounts are not used to make the lives of their victims easier. By banning curi along with Periergo you have facilitated a (minor) victory for Periergo. This is not right.

a history of threats against people who engage with him

THIS IS A SERIOUS ALLEGATION! PLEASE PROVIDE QUOTES

curi prefers to discuss in public, so they should be easy to find and verify. I have never known curi to threaten people. He may criticise them, but he does not threaten them.

Notably, curi has consistently and loudly opposed violence and the initiation of force. If people ask him to leave them alone (provided they haven’t e.g. committed a crime against him), he respects that.

being the historically most downvoted account in LessWrong history

This is not a reason to ban him, or anyone. Being disliked is not a reason for punishment.

Those three things in combination, a propensity for long unproductive discussions, a history of threats against people who engage with him, and being the historically most downvoted account in LessWrong history, make me overall think it’s better for curi to find other places as potential discussion venues.

“a history of threats against people who engage with him” has not been established or substantiated.

he seems well-intentioned

I believe he is. As far as I can tell he’s gone to great personal expense and trouble to keep FI alive for no other reason than that his sense of morality demands it. (That might be oversimplifying things, but I think the essence is the same. I think he believes it is the right thing to do, and a necessary thing to do.)

I do also think his most recent series of posts and comments is overall much less bad than the posts and comments he posted a few years ago (where most of his negative karma comes from)

He has gained karma since briefly returning to LW. I think you should retract the part about him having negative karma b/c it misrepresents the situation. He could have made a new account and he would have positive karma now. That means your judgement is based on past behaviour that was already punished. This is double jeopardy. (Edit: after some discussion on FI it looks like this isn’t double jeopardy, just double punishment. Double jeopardy specifically refers to being on trial for the same offense twice, not being punished twice.)

Moreover, curi is being punished for being honest and transparent. If he had registered a new account and hidden his identity, would you have banned him only based on his actions this past 1-2 months? If you can say yes, then fine, but I don’t think your argument holds in that case; the only part that is verifiable is based on your disapproval of his discussion methods. Disagreeing with him is fine. I think a proportionate response would be a warning.

As it stands no warning was given, and no attempt to learn his plans was made. I think doing that would be proportionate and appropriate. A ban is not.

It is significant that curi is not able to discuss this ban himself. I am voluntarily doing this, of my own accord. He was not able to defend himself or provide explanation.

This is especially problematic as you specifically say you think he was improving compared with his conduct several years ago.

I do also think his most recent series of posts and comments is overall much less bad than the posts and comments he posted a few years ago (where most of his negative karma comes from), but they still don’t strike me as great contributions to the LessWrong canon

This alone is not enough. A warning is proportionate.

are all low-karma

Unpopularity is no reason for a ban.

and I assign too high of a probability that old patterns will repeat themselves.

How is this different to pre-crime?

I think, given he had deliberately changed his modus operandi weeks ago and has not posted in 13 days, this is unfair and overly judgmental.

You go on to say:

and I do think that was the right move, but I don’t think it’s enough to tip the scales on this issue.

What could curi have done differently which would have tipped the scales? If there is no acceptable thing he could have done, why was action not taken weeks ago when he was active?

I believe it is fundamentally unjust to delay action in this fashion without talking with him first. curi has an incredibly long track record of discussion; he is very open to it. He is not someone who avoids taking responsibility for things; quite the opposite. If you had engaged him, I am confident he would have discussed things with you.

and to generally err on the side of curating our userbase pretty heavily and maintaining high standards.

It makes sense that you want to cultivate the best rational forums you can. I think that is a good goal. However, again, there were other, less extreme and more proportionate actions that could have been taken first, especially seeing as curi had changed his LW discussion policy and was inactive at the time of the ban.

We presumably disagree on the meaning of ‘high standards’, but I don’t think that’s particularly relevant here.

This means making difficult moderation decisions long before it is proven “beyond a reasonable doubt” that someone is not a net-positive contributor to the site.

There were many alternative actions you could have taken. For example, a 1-month ban. Restricting curi to only posting on his own shortform. Warning him of the circumstances and consequences under conditions, etc.

In this case, I think it is definitely not proven beyond a reasonable doubt that curi is overall net-negative for the site

I’m glad you’ve mentioned this, but LW is not a court of law and you are not bound to those standards (and no punishment here is comparable to the punishment a court might distribute). I think there are other good reasons for reconsidering curi’s ban.

banning him might well be a mistake, but I think the probabilities weigh heavily enough in favor of the net-negative, and the worst-case outcomes are bad enough, that on net I think this is the right choice.

I think there is a critical point to be made here: you could have taken no action at this time and put a mod-notification for activity on his account. If he were to return and do something you deemed unacceptable, you could swiftly warn him. If he did it again, then a short-term ban. Instead, this is a sledge-sized banhammer used when other options were available. It is a decision that is now publicly on LW and indicates that LW is possibly intolerant of things other than irrationality. I don’t think this is reflective of LW, and I think it reflects poorly on the moderation policies here. I don’t think it needs to be that way, though.

I think a conditional unbanning (i.e. 1 warning, with the next action being a swift short ban) is an appropriate action for the moderation team to take, and I implore you to reconsider your decision.

If you think this is not appropriate, then I request you explain why 2 years is an appropriate length of time, and why Periergo and curi should have identical ban lengths.

The alternative to pacifism does not need to be so heavy-handed.

I’d also like to note that curi has published a post on his blog regarding this ban; I read it after drafting this reply: http://curi.us/2381-less-wrong-banned-me

• You are judging curi and FI (Fallible Ideas) via your standards (LW standards), not FI’s standards. I think this is problematic.

The above post explicitly says that the ban isn’t a personal judgement of curi. It’s rather a question of whether or not it’s good to have curi around on LessWrong, and that’s where LW standards matter.

Unpopularity is no reason for a ban

That seems like a sentiment indicative of ignoring the reason for which he was banned. It was a utilitarian argument. The fact that someone gets downvoted is Bayesian evidence that it’s not valuable for people to interact with him on LessWrong.

How is this different to pre-crime?

If you imprison someone who murdered in the past because you are afraid they will murder again, that’s not pre-crime in most common senses of the word.

Additionally, even if it were, LW is not a place with virtue-ethics standards but one with utilitarian standards. Taking action to prevent things that are likely to negatively affect LW from happening in the future is perfectly in line with the idea of good gardening.

If you stand in your garden you don’t ask “what crimes did the plants commit and how should they be punished?”; you focus on the future.

• The above post explicitly says that the ban isn’t a personal judgement of curi. It’s rather a question of whether or not it’s good to have curi around on LessWrong, and that’s where LW standards matter.

Isn’t it even worse then, b/c no action was necessary?

But more to the point, isn’t the determination that person X is not good to have around a personal judgement? It doesn’t apply to everyone else.

I think what habryka meant was that he wasn’t making a personal judgement.

• This is not a reason to ban him, or anyone. Being disliked is not a reason for punishment.

The traditional guidance for up/downvotes has been “upvote what you would like to see more of, downvote what you would like to see less of”. If this is how votes are interpreted, then heavy downvotes imply “the forum’s users would on average prefer to see less content of this kind”. Someone posting the kind of content that’s unwanted on a forum seems like a reasonable reason to bar that person from the forum in question.

I agree with “being disliked is not a reason for punishment”, but people also have the right to choose who they want to spend their time with, even if someone who they preferred not to spend time with viewed that as being punished. In my book, banning people from a private forum is more like “choosing not to invite someone to your party again, after they previously caused others to have a bad time” than it is like “punishing someone”.

• I’m a fan of solving problems with technology. One way to solve this problem of people not liking an author’s content is to allow users to put people on an ignore list (and maybe for some period of time).
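For concreteness, here’s a minimal sketch of what such a per-user ignore list with an optional time limit could look like. This is purely hypothetical (in-memory only, made-up names; not anything LW actually implements):

```python
import time


class IgnoreList:
    """One user's ignore list, with optional expiry (hypothetical sketch)."""

    def __init__(self):
        # ignored user id -> expiry timestamp, or None for "until unignored"
        self._entries = {}

    def ignore(self, user_id, duration_seconds=None):
        """Ignore a user, optionally only for duration_seconds."""
        expiry = time.time() + duration_seconds if duration_seconds is not None else None
        self._entries[user_id] = expiry

    def unignore(self, user_id):
        self._entries.pop(user_id, None)

    def is_ignored(self, user_id):
        """True if the user is currently ignored; expired entries are dropped lazily."""
        if user_id not in self._entries:
            return False
        expiry = self._entries[user_id]
        if expiry is not None and time.time() >= expiry:
            del self._entries[user_id]
            return False
        return True
```

A feed would then just filter out content whose author satisfies `is_ignored`; persistence and UI are left out of the sketch.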

• How many people here remember Usenet’s kill files?

• The traditional guidance for up/downvotes has been “upvote what you would like to see more of, downvote what you would like to see less of”. If this is how votes are interpreted, then heavy downvotes imply “the forum’s users would on average prefer to see less content of this kind”.

You’re using quotes but I am not sure what you’re quoting; do you just mean to emphasize/offset those clauses?

but people also have the right to choose who they want to spend their time with,

Sure, that might be part of the reason curi hadn’t been active on LW for 13 days at the time of the ban.

(continued)

even if someone who they preferred not to spend time with viewed that as being punished.

I don’t know if curi thinks it’s punishment. I think it’s punishment, and I think most ppl would agree that ‘a ban’ would be an answer to the question (in online forum contexts, generally) ‘What is an appropriate punishment?’ That would mean a ban is a punishment.

LW mods can do what they want; in essence it’s their site. I’m arguing:

1. it’s unnecessary

2. it was done improperly

3. it reflects badly on LW and creates a culture hostile to opposing ideas

4. (3) is antithetical to the opening lines of the LessWrong FAQ (which I quote below). Note: I’m introducing this argument in this post; I didn’t mention it originally.

5. significant parts of habryka’s post were factually incorrect. It was noted in FI, btw, that a) habryka’s comments were libel, and b) that curi’s reaction—quoted below—is mild and undercuts habryka’s claim.

curi wrote (in his post on the LW ban):

Those three things in combination, a propensity for long unproductive discussions, a history of threats against people who engage with him, and being the historically most downvoted account in LessWrong history, make me overall think it’s better for curi to find other places as potential discussion venues.

I didn’t threaten anyone. I’m guessing it was a careless wording. I think habryka should retract or clarify it. Above, habryka used “attack[]” as a synonym for criticize. I don’t like that but it’s pretty standard language. But I don’t think using “threat[en]” as a synonym for criticize is reasonable.

“threaten” has meanings like “state one’s intention to take hostile action against someone in retribution for something done or not done” and “express one’s intention to harm or kill” (New Oxford Dictionary). This is the one thing in the post that I strongly object to.

from the FI discussion:

JustinCEO: i think curi’s response to this libel is written in a super mild way

JustinCEO: which notably contrasts with being the sort of person who would have “a history of threats against people who engage with him” in the first place

LessWrong FAQ (original emphasis):

LessWrong is a community dedicated to improving our reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. More generally, we want to develop and practice the art of human rationality.

To that end, LessWrong is a place to 1) develop and train rationality, and 2) apply one’s rationality to real-world problems.

I don’t think the things people have described (in this thread) as seemingly important parts of LW are at all reflected by this quote; rather, they contradict it.

• significant parts of habryka’s post were factually incorrect.

I am not currently aware of any factual inaccuracies, but would be happy to correct any you point out.

The only thing you pointed out was something about the word “threat” being wrong, but that only appears to be true under some very narrow definition of threat. This might be weird rationalist jargon, but I’ve reliably used the word “threat” to simply mean signaling some kind of intention of inflicting some kind of punishment in response to some condition by the other person. Curi and other people from FI have done this repeatedly, and the “list of people who have evaded/lied/etc.” is exactly one of such threats, whether explicitly labeled as such or not.

The average LessWrong user would pretty substantially regret having engaged with curi if they later end up on that list, so I do think it’s a pretty concrete punishment, and while there might be some chance you are unaware of the negative consequences, this doesn’t really change the reality very much: due to the way I’ve seen curi active on the site, engaging with him is a trap that people are likely to regret.

• I’ve reliably used the word “threat” to simply mean signaling some kind of intention of inflicting some kind of punishment in response to some condition by the other person. Curi and other people from FI have done this repeatedly, and the “list of people who have evaded/lied/etc.” is exactly one of such threats, whether explicitly labeled as such or not.

This game-theoretic concept of “threat” is fine, but underdetermined: what counts as a threat in this sense depends on where the “zero point” is; what counts as aggression versus self-defense depends on what the relevant “property rights” are. (Scare quotes on “property rights” because I’m not talking about legal claims, but “property rights” is an apt choice of words, because I’m claiming that the way people negotiate disputes that don’t rise to the level of dragging in the (slow, expensive) formal legal system has a similar structure.)

If people have a “right” to not be publicly described as lying, evading, &c., then someone who puts up a “these people lied, evaded, &c.” page on their own website is engaging in a kind of aggression. The page functions as a threat: “If you don’t keep engaging in a way that satisfies my standards of discourse, I’ll publicly call you a liar, evader, &c.”

If people don’t have a “right” to not be publicly described as lying, evading, &c., then a website administrator who cites a user’s “these people lied, evaded, &c.” page on their own website as part of a rationale for banning that user is engaging in a kind of aggression. The ban functions as a threat: “If you don’t cede your claim on being able to describe other people as lying, evading, &c., I won’t let you participate in this forum.”

The size of the website administrator’s threat depends on the website’s “market power.” Less Wrong is probably small enough and niche enough that the threat doesn’t end up controlling anyone’s off-site behavior: anyone who perceives not being able to post on Less Wrong as a serious threat is probably already so deeply socially embedded into our little robot cult that they either have similar property-rights intuitions as the administrators, or are too loyal to the group to publicly accuse other group members of lying, evading, &c., even if they privately think they are lying, evading, &c. (Nobody likes self-styled whistleblowers!) But getting kicked off a service with the market power of a Google, Facebook, Twitter, &c. is a sufficiently big deal to sufficiently many people that those websites’ terms of service do exert some controlling pressure on the rest of Society.

What are the consequences of each of these “property rights” regimes?

In a world where people have a right to not be publicly described as lying, evading, &c., people don’t have to be afraid of losing reputation on that account. But we also lose out on the possibility of having a public accounting of who has actually in fact lied, evaded, &c. We give up on maintaining the coordination equilibrium such that words like “lie” have a literal meaning that can actually be true or false, rather than the word itself simply constituting an attack.

Which regime better fulfills our charter of advancing the art of human rationality? I don’t think I’ve written this skillfully enough for you to not be able to guess which answer I lean towards, but you shouldn’t trust my answer if it seems like something I might lie or evade about! You need to think it through for yourself.

• For what it’s worth, I think a decision to ban would stand on just his pursuit of conversational norms that reward stamina over correctness, in a way that I think makes LessWrong worse at intellectual progress. I didn’t check out this page, and it didn’t factor into my sense that curi shouldn’t be on LW.

I also find it somewhat worrying that, as I understand it, the page was a combination of “quit”, “evaded”, and “lied”, of which ‘quit’ is not worrying (I consider someone giving up on a conversation with curi understandable instead of shameful), and that getting wrapped up in the “&c.” instead of being the central example seems like it’s defining away my main crux.

• To elaborate on this, I think there are two distinct issues: “do they have the right norms?” and “do they do norm enforcement?”. The second is normally good instead of problematic, but makes the first much more important than it would be otherwise. I see Zack_M_Davis as pointing out “hey, if we don’t let people enforce norms because that would make normbreakers feel threatened, do we even have norms?”, which is a valid point, but which feels somewhat irrelevant to the curi question.

• If I understand you correctly then your primary argument appears to be that a ban is (1) too harsh a judgment where a warning would have sufficed, (2) that curi ought to have some sort of appeals process, and (3) that habryka’s top-level comment does not provide detailed citations for all the accusations against curi.

(1) Curi was warned at least once.

(2) Curi is being banned for wasting time with long, unproductive conversations. An appeals process would produce another long, unproductive conversation.

(3) Specific quotes are unnecessary. It is blindingly obvious from a glance through curi’s profile, and even from curi’s response you linked to, that curi is damaging to productive dialogue on Less Wrong.

The strongest claim against curi is “a history of threats against people who engage with him [curi]”. I was able to confirm this via a quick glance through curi’s past behavior on this site. In this comment curi threatens to escalate a dialogue by mirroring it off of this website. By the standards of collaborative online dialogue, this constitutes a threat against someone who engaged with him.

Edit: grammar.

• lsusr said:

(1) Curi was warned at least once.

I’m reasonably sure the slack comments refer to events 3 years ago, not anything in the last few months. I’ll check, though.

There are some other comments about recent discussion in that thread, like this: https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction?commentId=38FzXA6g54ZKs3HQY

gjm said:

I had not looked, at that point; I took “mir­rored” to mean tak­ing copies of whole dis­cus­sions, which would im­ply copy­ing other peo­ple’s writ­ing en masse. I have looked, now. I agree that what you’ve put there so far is prob­a­bly OK both legally and morally.

My apolo­gies for be­ing a bit twitchy on this point; I should maybe ex­plain for the benefit of other read­ers that the last time curi came to LW, he did take a whole pile of dis­cus­sion from the LW slack and copy it en masse to the pub­li­cly-visi­ble in­ter­net, which is one rea­son why I thought it plau­si­ble he might have done the same this time.

I don’t think there is a case for (1). Unless gjm is a mod and there are things I don’t know?

lsusr said:

(2) Curi is be­ing banned for wast­ing time with long, un­pro­duc­tive con­ver­sa­tions. An ap­peals pro­cess would pro­duce an­other long, un­pro­duc­tive con­ver­sa­tion.

habryka ex­plic­itly men­tions curi chang­ing his LW com­ment­ing policy to be ‘less de­mand­ing’. I can see the mo­ti­va­tion for ex­pe­di­tion, but the mods don’t have to speedrun it. I think it’s bad there wasn’t any com­mu­ni­ca­tion be­fore­hand.

lsusr said:

(3) Specific quotes are unnecessary. It is blindingly obvious from a glance through curi’s profile, and even from curi’s response you linked to, that curi is damaging to productive dialogue on Less Wrong.

I don’t think that’s the case. His net karma has in­creased, and judg­ing him for con­tent on his blog—not his con­tent on LW—does not es­tab­lish whether he was ‘dam­ag­ing to pro­duc­tive di­alogue on Less Wrong’.

His posts on Less Wrong have been contributions. For example, www.lesswrong.com/posts/tKcdTsMFkYjnFEQJo/can-social-dynamics-explain-conjunction-fallacy-experimental is a direct response to one of EY’s posts, and it was net-upvoted. He followed that up with two more net-upvoted posts.

This is not the track record of someone wanting to waste time. I know there are disagreements between LW and curi / FI. If that’s the main point of contention, and that’s why he’s being banned, then so be it. But he doesn’t deserve to be mistreated and have baseless accusations thrown at him.

lsusr said:

The strongest claim against curi is “a history of threats against people who engage with him [curi]”. I was able to confirm this via a quick glance through curi’s past behavior on this site. In this comment curi threatens to escalate a dialogue by mirroring it off of this website. By the standards of collaborative online dialogue, this constitutes a threat against someone who engaged with him.

We have sub­stan­tial dis­agree­ments about what con­sti­tutes a threat, in that case. I think a threat needs to in­volve some­thing like dan­ger, or vi­o­lence, or some­thing like that. It’s not a ‘threat’ to copy pub­lic dis­cus­sion un­der fair use for crit­i­cism and com­men­tary.

I googled the defi­ni­tion, and these are the two (for define:threat)

• a state­ment of an in­ten­tion to in­flict pain, in­jury, dam­age, or other hos­tile ac­tion on some­one in re­tri­bu­tion for some­thing done or not done.

• a per­son or thing likely to cause dam­age or dan­ger.

Nei­ther of these ap­ply.

• I googled the defi­ni­tion, and these are the two (for define:threat)

• a state­ment of an in­ten­tion to in­flict pain, in­jury, dam­age, or other hos­tile ac­tion on some­one in re­tri­bu­tion for some­thing done or not done.

• a per­son or thing likely to cause dam­age or dan­ger.

Nei­ther of these ap­ply.

I pre­fer this defi­ni­tion, “a dec­la­ra­tion of an in­ten­tion or de­ter­mi­na­tion to in­flict pun­ish­ment, in­jury, etc., in re­tal­i­a­tion for, or con­di­tion­ally upon, some ac­tion or course; men­ace”. I think the word “re­tri­bu­tion” im­plies un­due jus­tice. A “threat” need only im­ply re­tal­i­a­tion, not re­tri­bu­tion, of hos­tile ac­tion.

We have sub­stan­tial dis­agree­ments about what con­sti­tutes a threat,

Ev­i­dently yes, as do dic­tio­nar­ies.

• This is the defi­ni­tion that I had in mind when I wrote the no­tice above, sorry for any con­fu­sion it might have caused.

• define:threat

I pre­fer this defi­ni­tion, “a dec­la­ra­tion of an in­ten­tion or de­ter­mi­na­tion to in­flict pun­ish­ment, in­jury, etc., in re­tal­i­a­tion for, or con­di­tion­ally upon, some ac­tion or course; men­ace”.

This defi­ni­tion seems okay to me.

un­due justice

I don’t know how justice can be undue. Do you mean something like undue or excessive prosecution? Or persecution, perhaps? Though I don’t think either prosecution or persecution describes anything curi’s done on LW. If you have counterexamples I would appreciate it if you could quote them.

We have sub­stan­tial dis­agree­ments about what con­sti­tutes a threat,

Ev­i­dently yes, as do dic­tio­nar­ies.

I don’t think the dictionary definitions disagree much. It’s not a substantial disagreement. thesaurus.com seems to agree; it lists them as ~strong synonyms. The crux is retribution vs retaliation, and retaliation is more general. The mafia can threaten shopkeepers with violence if they don’t pay protection. I think retaliation is the better-fitting word.

How­ever, this still does not ap­ply to any­thing curi has done!

• I do not think the core dis­agree­ment be­tween you and me comes from a failure of me to ex­plain my thoughts clearly enough. I do not be­lieve that elab­o­rat­ing upon my rea­son­ing would get you to change your mind about the core dis­agree­ment. Elab­o­rat­ing upon my po­si­tion would there­fore waste both of our time.

The same goes for your po­si­tion. The many words you have already writ­ten have failed to move me. I do not ex­pect even more words to change this pat­tern.

Curi is be­ing banned for wast­ing time with long, un­pro­duc­tive con­ver­sa­tions. It would be ironic for me to em­broil my­self in such a con­ver­sa­tion as a con­se­quence.

• I do not think the core dis­agree­ment be­tween you and me comes from a failure of me to ex­plain my thoughts clearly enough.

I don’t ei­ther.

The same goes for your po­si­tion. The many words you have already writ­ten have failed to move me. I do not ex­pect even more words to change this pat­tern.

Sure, we can stop.

Curi is be­ing banned for wast­ing time with long, un­pro­duc­tive con­ver­sa­tions.

I don’t know any­where I could go to find out that this is a bannable offense. If it is not in a body of rules some­where, then it should be added. If the mods are un­will­ing to add it to the rules, he should be un­banned, sim­ple as that.

Maybe that idea is worth dis­cussing? I think it’s rea­son­able. If some­thing is an offense it should be pub­li­cly stated as such and new and con­tin­u­ing users should be able to point to it and say “that’s why”. It shouldn’t feel like it was made up on the fly as a spe­cial case—it’s a prob­lem when new rules are in­vented ad-hoc and not canon­i­cal­ized (I don’t have a prob­lem with JIT rule­books, it’s prac­ti­cal).

• Ar­guably, if there is some­thing truly wrong with the list, I should have an is­sue with it.

This is non-ob­vi­ous. It seems like you are ex­trap­o­lat­ing from your­self to ev­ery­one else. In my model, how much you would mind be­ing on such a list is largely de­ter­mined by how much so­cial anx­iety you gen­er­ally feel. I would very much mind be­ing on that list, even if I felt like it was jus­tified.

Know­ing the ex­is­tence of the list (again, even if it were jus­tified) would also make me un­easy to talk to curi.

• Ar­guably, if there is some­thing truly wrong with the list, I should have an is­sue with it.

This is non-obvious. It seems like you are extrapolating from yourself to everyone else. In my model, how much you would mind being on such a list is largely determined by how much social anxiety you generally feel. I would very much mind being on that list, even if I felt like it was justified.

I think this is fair, and additionally I maybe shouldn’t have used the word “truly”; it’s a very laden word. I do think that, on the balance of probabilities, my case does reduce the likelihood of something being foundationally wrong with it, though. (Note: I’ve said this in what I think is a LW-friendly way. I’d say it differently on FI.)

One thing I do think, though, is that peo­ple’s so­cial anx­iety does not make things in gen­eral right or wrong, but can be de­ci­sive wrt think­ing about a sin­gle ac­tion.

Another thing to point out is that anonymous participation in FI is okay; it’s reasonably easy to start with an anonymous/pseudonymous email. curi’s blog/forum hybrid also allows for anonymous posting. FI is very pro-free-speech.

Know­ing the ex­is­tence of the list (again, even if it were jus­tified) would also make me un­easy to talk to curi.

I think that’s okay, curi isn’t try­ing to at­tract ev­ery­one as an au­di­ence, and FI isn’t de­signed to be a fo­rum which makes peo­ple feel com­fortable, as such. It has differ­ent goals from e.g. LW or a philos­o­phy sub­red­dit.

I think we’d agree that norms at FI aren’t typ­i­cal and aren’t for ev­ery­one. It’s a place where any­one can post, but that doesn’t mean that ev­ery­one should, sorta thing.

• That means your judge­ment is based on past be­havi­our that was already pun­ished.

I don’t un­der­stand this sen­tence at all. How has he already been pun­ished for his past be­hav­ior? In­deed, he has never been banned be­fore, so there was never any pre­vi­ous pun­ish­ment.

• I wel­come the trans­parency, but this “I don’t want oth­ers to up­date on this as be­ing much ev­i­dence about whether it makes sense to have curi in their com­mu­ni­ties” seems a bit weird to me. “a propen­sity for long un­pro­duc­tive dis­cus­sions, a his­tory of threats against peo­ple who en­gage with him” and “I as­sign too high of a prob­a­bil­ity that old pat­terns will re­peat them­selves” seem like quite a judge­ment and why would some­one else not up­date on this? Ad­di­tion­ally, I think that while a ban is some­times nec­es­sary (e.g. ha­rass­ment), a 2-year ban seems like quite a jump. I could think of a num­ber of differ­ent sanc­tions, e.g. block­ing some­one from com­ment­ing in gen­eral; giv­ing users the op­tion to block some­one from com­ment­ing; block­ing some­one from writ­ing any­thing; limit­ing some­one’s au­thor­ity to her own short­form; all of these things for some time.

• “I don’t want oth­ers to up­date on this as be­ing much ev­i­dence about whether it makes sense to have curi in their com­mu­ni­ties” seems a bit weird to me. “a propen­sity for long un­pro­duc­tive dis­cus­sions, a his­tory of threats against peo­ple who en­gage with him” and “I as­sign too high of a prob­a­bil­ity that old pat­terns will re­peat them­selves” seem like quite a judge­ment and why would some­one else not up­date on this?

The key thing I wanted to com­mu­ni­cate is that it seems quite plau­si­ble to me that these pat­terns are the re­sult of curi in­ter­fac­ing speci­fi­cally with the LessWrong cul­ture in un­healthy ways. I can imag­ine him in­ter­fac­ing with other cul­tures with much less bad re­sults.

I also said “I don’t want oth­ers to think this is much ev­i­dence”, not “this is no ev­i­dence”. Of course it is some ev­i­dence, but I think over­all I would ex­pect peo­ple to up­date a bit too much on this, and as I said, I wouldn’t be very sur­prised to see curi par­ti­ci­pate well in other on­line com­mu­ni­ties.

• I also didn’t un­der­stand what your sen­tence was say­ing. It read to me as “I don’t want peo­ple to up­date on this post”. When you pointed speci­fi­cally to LW’s cul­ture (which is very ar­gu­men­ta­tive) pos­si­bly be­ing a key cause it was clearer what you were say­ing. Thanks for the clar­ifi­ca­tion (and for try­ing to avoid nega­tive mis­in­ter­pre­ta­tions of your com­ment).

• Ad­di­tion­ally, I think that while a ban is some­times nec­es­sary (e.g. ha­rass­ment), a 2-year ban seems like quite a jump. I could think of a num­ber of differ­ent sanc­tions, e.g. block­ing some­one from com­ment­ing in gen­eral; giv­ing users the op­tion to block some­one from com­ment­ing; block­ing some­one from writ­ing any­thing; limit­ing some­one’s au­thor­ity to her own short­form; all of these things for some time.

I am not sure. I re­ally don’t like the world where some­one is banned from com­ment­ing on other peo­ple’s posts, but can still make top-level posts, or is banned from mak­ing top-level posts but can still com­ment. Both of these end up in re­ally weird equil­ibria where you some­times can’t re­ply to con­ver­sa­tions you started and re­spond to ob­jec­tions other peo­ple make to your ar­gu­ments, and that just seems re­ally bad.

I also don’t re­ally know what those things would have done. I don’t think those things would have re­duced the un­cer­tainty of whether curi is a good fit for LessWrong su­per much, and feel like they could have just dragged things out into a long pe­riod of con­flict that would have been more stress­ful for ev­ery­one.

The “block­ing some­one from writ­ing any­thing” does feel like an op­tion. Like, at least you can still vote and read. I do think that seems po­ten­tially like the bet­ter op­tion, but I don’t think we cur­rently ac­tu­ally have the tech­ni­cal in­fras­truc­ture to make that hap­pen. I might con­sider build­ing that for fu­ture oc­ca­sions like this.

• The “block­ing some­one from writ­ing any­thing” does feel like an op­tion. Like, at least you can still vote and read. I do think that seems po­ten­tially like the bet­ter op­tion, but I don’t think we cur­rently ac­tu­ally have the tech­ni­cal in­fras­truc­ture to make that hap­pen. I might con­sider build­ing that for fu­ture oc­ca­sions like this.

Block­ing from writ­ing but al­low­ing to vote seems like a re­ally bad idea. Be­ing read-only is already available — that’s the ca­pa­bil­ity of any­one with­out an ac­count.

Gen­er­ally I’d be against com­pli­cated sub­sets of per­mis­sions for var­i­ous classes of dis­favoured mem­bers. Sim­pler to say that some­one is ei­ther a mem­ber, or they’re not.

• Ad­di­tion­ally, I’d like to know whether peo­ple are warned be­fore they are banned, and whether they are asked about their own view of the mat­ter.

• Some­times peo­ple are warned, and some­times they aren’t, de­pend­ing on the cir­cum­stances. By vol­ume, the vast ma­jor­ity of our bans are spam­mers, who aren’t warned. Of users who have posted more than 3 posts to the site, I be­lieve over half (and prob­a­bly closer to 80%?) are warned, and many are warned and then not banned. [See this list.]

• Yeah, al­most ev­ery­one who we ban who has any real con­tent on the site is warned. It didn’t feel nec­es­sary for curi, be­cause he has already re­ceived so much feed­back about his ac­tivity on the site over the years (from many users as well as mods), and I saw very lit­tle prob­a­bil­ity of things chang­ing be­cause of a warn­ing.

• Yeah, al­most ev­ery­one who we ban who has any real con­tent on the site is warned. It didn’t feel nec­es­sary for curi, be­cause he has already re­ceived so much feed­back about his ac­tivity on the site over the years (from many users as well as mods), and I saw very lit­tle prob­a­bil­ity of things chang­ing be­cause of a warn­ing.

I think you’re deny­ing him an im­por­tant chance to do er­ror cor­rec­tion via that de­ci­sion. (This is a par­tic­u­larly im­por­tant con­cept in CR/​FI)

curi ev­i­dently wanted to change some things about his be­havi­our, oth­er­wise he wouldn’t have up­dated his com­ment­ing policy. How do you know he wouldn’t have up­dated it more if you’d warned him? That’s ex­actly the type of crit­i­cism we (CR/​FI) think is use­ful.

That sort of up­date is ex­actly the type of thing that would be rea­son­able to ex­pect next time he came back (con­sid­er­ing that he was away for 2 weeks when the ban was an­nounced). He didn’t want to be banned, and he didn’t want to have shitty dis­cus­sions, ei­ther. (I don’t know those things for cer­tain, but I have high con­fi­dence.)

What prob­a­bil­ity would you as­sign to him con­tin­u­ing just as be­fore if you said some­thing like “If you keep con­tin­u­ing what you’re do­ing, I will ban you. It’s for these rea­sons.” Ideally, you could add “Here they are in the rules/​faq/​what­ever”.

Prac­ti­cally, the chance of him chang­ing is lower now be­cause there isn’t any point if he’s never given any chances. So in some ways you were ex­actly right to think there’s low prob­a­bil­ity of him chang­ing, it’s just that it was due to your ac­tions. Ac­tions which don’t need to be per­ma­nent, might I add.

• I think you’re deny­ing him an im­por­tant chance to do er­ror cor­rec­tion via that de­ci­sion. (This is a par­tic­u­larly im­por­tant con­cept in CR/​FI)

I agree that if we wanted to ex­tend him more op­por­tu­ni­ties/​re­sources/​etc., we could, and that a ban is a de­ci­sion to not do that. But it seems to me like you’re fo­cus­ing on the benefit to him /​ “is there any chance he would get bet­ter?”, as op­posed to the benefit to the com­mu­nity /​ “is it rea­son­able to ex­pect that he would get bet­ter?”.

As stew­ards of the com­mu­nity, we need to make de­ci­sions tak­ing into ac­count both the di­rect im­pact (on curi for be­ing banned or not) and the in­di­rect im­pact (on other peo­ple de­cid­ing whether or not to use the site, or their ex­pe­rience be­ing bet­ter or worse).

• I’m not sure about other cases, but in this case curi wasn’t warned. If you’re interested, he and I discuss the ban in the first 30 mins of this stream.

• I agree with your first paragraph.

Whether someone is a “good fit” should already be visible from their karma (and I think karma then translates into karma points per vote?), and I don’t see why that should additionally lead to a ban or something. A ban, or a writing ban, could result from destructive behavior.

I think there is no real point in hav­ing peo­ple blocked from read­ing. Writ­ing—ok (though af­ter all things start out as per­sonal blog posts in any case and don’t have to be made front­page posts).

• FYI I am on that list and fine with it—curi and I discussed this post a bit here: https://www.youtube.com/watch?v=MxVzxS8uMto

I think you’re wrong on mul­ti­ple counts. Will re­ply more in a few hours.

• FYI and FWIW curi has up­dated the post to re­move emails and re­word the open­ing para­graph.

• I don’t re­call learn­ing in school that most of “the bad guys” from his­tory (e.g., Com­mu­nists, Nazis) thought of them­selves as “the good guys” fight­ing for im­por­tant moral rea­sons. It seems like teach­ing that fact, and in­still­ing moral un­cer­tainty in gen­eral into chil­dren, would pre­vent a lot of se­ri­ous man-made prob­lems (in­clud­ing prob­lems we’re see­ing play out to­day). So why hasn’t civ­i­liza­tion figured that out already? Or is not teach­ing moral un­cer­tainty some kind of Ch­ester­ton’s Fence, and teach­ing it widely would make the world even worse off on ex­pec­ta­tion?

• I won­der if any­one has ever writ­ten a man­i­festo for moral un­cer­tainty, maybe some­thing along the lines of:

We hold these truths to be self-ev­i­dent, that we are very con­fused about moral­ity. That these con­fu­sions should be prop­erly re­flected as high de­grees of un­cer­tainty in our moral epistemic states. That our moral un­cer­tain­ties should in­form our in­di­vi­d­ual and col­lec­tive ac­tions, plans, and poli­cies. … That we are also very con­fused about nor­ma­tivity and meta-ethics and don’t re­ally know what we mean by “should”, in­clud­ing in this doc­u­ment...

Yeah, I re­al­ize this would be a hard sell in to­day’s en­vi­ron­ment, but what if build­ing Friendly AI re­quires a civ­i­liza­tion sane enough to con­sider this com­mon sense? I mean, for ex­am­ple, how can it be a good idea to gift a su­per-pow­er­ful “cor­rigible” or “obe­di­ent” AI to a civ­i­liza­tion full of peo­ple with crazy amounts of moral cer­tainty?

• Non-du­al­ist philoso­phies such as Zen place high value on con­fu­sion (they call it “don’t know mind”) and have a so­phis­ti­cated frame­work for com­mu­ni­cat­ing this idea. Zen is one of the al­ter­na­tive in­tel­lec­tual tra­di­tions I al­luded to in my con­tro­ver­sial post about eth­i­cal progress.

The Dao De Jing 道德经, writ­ten 2.5 thou­sand years ago, in­cludes strong warn­ings against on­tolog­i­cal cer­tainty (and, by ex­ten­sion, moral cer­tainty). If we naïvely ap­ply the Lindy Effect then Chi­nese civ­i­liza­tion is likely to con­tinue for thou­sands more years while Western sci­ence an­nihilates it­self af­ter mere cen­turies. This may not be a co­in­ci­dence.

Here is the man­i­festo you are look­ing for:

道可道也，非恒道也。名可名也，非恒名也。无名，万物之始也；有名，万物之母也。故恒无欲也，以观其眇；恒有欲也，以观其所徼。两者同出，异名同谓。玄之又玄，众眇之门。

―Chap­ter 1 of the Dao De Jing 道德经

Un­for­tu­nately, the du­al­ity of empti­ness and form is difficult to trans­late into English.

• So why hasn’t civ­i­liza­tion figured that out already?

States evolve to per­pet­u­ate them­selves. Civ­i­liza­tion has figured it out (in the blind idiot god sense of “figured it out”) that moral un­cer­tainty is teach­able and de­creases trust in the state ide­ol­ogy. You have it back­ward. The states in ex­is­tence to­day pro­mote moral cer­tainty in chil­dren for ex­actly the same rea­son the Com­mu­nist and Nazi states did.

• Or is not teach­ing moral un­cer­tainty some kind of Ch­ester­ton’s Fence, and teach­ing it widely would make the world even worse off on ex­pec­ta­tion?

I ex­pect it is this. Gen­eral moral un­cer­tainty has all kinds of prob­lems in ex­pec­ta­tion, like:

• It ru­ins moral­ity as a co­or­di­na­tion mechanism among the group.

• It weak­ens moral con­vic­tion in the in­di­vi­d­ual, which is su­per bad from the per­spec­tive of peo­ple who be­lieve there are di­rect con­se­quences for a lack of con­vic­tion (like Hell).

• It cre­ates space for differ­ent and pos­si­bly weird moral­ities to arise; I don’t know of any moral sys­tems that think it is a good thing to be a mem­ber of a differ­ent moral sys­tem, so I ex­pect all the cur­rent moral sys­tems to agree on this one.

I feel like the first bul­let point is the real driv­ing force be­hind the prob­lems it would pre­vent, any­how. Mo­ral un­cer­tainty doesn’t cause peo­ple to do good things; it keeps them from do­ing good things (that are differ­ent from other groups’ defi­ni­tions of good things).

• So why hasn’t civ­i­liza­tion figured that out already? Or is not teach­ing moral un­cer­tainty some kind of Ch­ester­ton’s Fence, and teach­ing it widely would make the world even worse off on ex­pec­ta­tion?

This is sort of a re­hash of sibling com­ments, but I think there are two fac­tors to con­sider here.

The first is the rules. It is very im­por­tant that peo­ple drive on the cor­rect side of the road, and not have un­cer­tainty about which side of the road is cor­rect, and not very im­por­tant whether they have a dis­tinc­tion be­tween “cor­rect for <coun­try> in <year>” and “cor­rect ev­ery­where and for all time.”

The second is something like the goal. At one point, people thought it was very important that society have a shared goal, and worked hard to make it expansive; things like “freedom of religion” are what civilization figured out in order to have narrow shared goals (like “keep the peace”) rather than expansive shared goals (like “as many get to Catholic Heaven as possible”). It is unclear to me whether we’re better off with moral uncertainty as a generator for “narrow shared goals”, or whether narrow shared goals are even what we should be going for.

• It seems like teach­ing that fact, and in­still­ing moral un­cer­tainty in gen­eral into children

I would guess that teach­ing that fact is not enough to in­still moral un­cer­tainty. And that in­still­ing moral un­cer­tainty would be very hard.

• Often ex­press­ing any un­der­stand­ing to­wards the mo­tives of a “bad guy” is taken as sig­nal­ing ac­cep­tance for their ac­tions. There was e.g. con­tro­versy around the movie Down­fall for this:

Down­fall was the sub­ject of dis­pute by crit­ics and au­di­ences in Ger­many be­fore and af­ter its re­lease, with many con­cerned of Hitler’s role in the film as a hu­man be­ing with emo­tions in spite of his ac­tions and ide­olo­gies.[40][30][49] The por­trayal sparked de­bate in Ger­many due to pub­lic­ity from com­men­ta­tors, film mag­a­z­ines, and news­pa­pers,[25][50] lead­ing the Ger­man tabloid Bild to ask the ques­tion, “Are we al­lowed to show the mon­ster as a hu­man be­ing?”.[25]
It was crit­i­cized for its scenes in­volv­ing the mem­bers of the Nazi party,[23] with au­thor Giles MacDonogh crit­i­ciz­ing the por­tray­als as be­ing sym­pa­thetic to­wards SS officers Wilhelm Mohnke and Ernst-Gün­ther Schenck,[51] the former of whom was ac­cused of mur­der­ing a group of Bri­tish pris­on­ers of war in the Wormhoudt mas­sacre.[N 1]
• Wouldn’t more moral un­cer­tainty make peo­ple less cer­tain that Com­mu­nism or Nazism were wrong?

• That’s definitely how it was taught in my high school, so it’s not un­known.

• Did it make you or your class­mates doubt your own moral­ity a bit? If not, maybe it needs to be taught along with the out­side view and/​or the teacher needs to ex­plic­itly talk about how the les­son from his­tory is that we shouldn’t be so cer­tain about our moral­ity...

• We want to teach chil­dren to ac­cept the norms of our so­ciety and the nar­ra­tive we tell about it. A lot of what we teach is es­sen­tial pro-sys­tem pro­pa­ganda.

Teach­ing moral un­cer­tainty doesn’t help with that and it also doesn’t help with get­ting stu­dents to score bet­ter on stan­dard­ized tests which was the main goal of ed­u­ca­tional re­forms of the last decades.

• Com­pul­sory ed­u­ca­tion is an or­gan of the state. Na­tion-states evolve to per­pet­u­ate their own ex­is­tence. Teach­ing moral un­cer­tainty is counter-pro­duc­tive to­ward main­tain­ing the norms of a na­tion-state.

• I guess it’s because high-conviction ideologies outperform low-conviction ones, including nationalistic and political ideologies, and religions. Dennett’s Gold Army/Silver Army analogy explains how conviction can build loyalty and strength, but a similar thing is probably true for movement-builders. Also, conviction might make adherents feel better, and therefore simply be more attractive.

• If I had to guess, I’d guess the an­swer is some com­bi­na­tion of “most peo­ple haven’t re­al­ized this” and “of those who have re­al­ized it, they don’t want to be seen as sym­pa­thetic to the bad guys”.

• The full-text ver­sion of the Embed­ded Agency se­quence has col­ors! And it’s not just in the form of an image, but they’re ac­tu­ally em­bed­ded as text. Is there any way a nor­mal LW user can do the same with any of the three ed­i­tors? (I.e., LW docs, Draft-JS, or Mark­down.)

• Alas, not. The rea­son is a bit silly. I can en­able text-col­ors in our ed­i­tor, but this has the un­in­tended side-effect of now copy­ing over the text-color from wher­ever you are copy­ing your text from, even the shade of black that that other pro­gram uses, which is hard to spot, but ends up look­ing kind of un­set­tling on LessWrong. Since the vast ma­jor­ity of posts are just writ­ten in nor­mal “black-or-grey on white” text col­ors, the cost of that seemed larger than the abil­ity to al­low peo­ple to use col­ored text.

Even­tu­ally we could prob­a­bly do some­thing clever, like fil­ter­ing out grey shades of text when you copy-paste it into the ed­i­tor, but I haven’t got­ten around to that, though PRs are always wel­come.
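For what it’s worth, the clever filtering step could be quite small. Here is a minimal sketch of the logic (function names and thresholds are my own guesses, not LessWrong’s actual editor code, and it assumes pasted spans carry simple hex or `rgb()` CSS colors): near-greyscale dark shades get stripped back to the default text color, while saturated colors are kept as intentional.

```python
import re

def parse_color(css):
    """Parse a '#rrggbb' or 'rgb(r, g, b)' CSS color into an (r, g, b) tuple."""
    m = re.fullmatch(r"#([0-9a-fA-F]{6})", css)
    if m:
        n = int(m.group(1), 16)
        return ((n >> 16) & 255, (n >> 8) & 255, n & 255)
    m = re.fullmatch(r"rgb\(\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\)", css)
    if m:
        return tuple(int(g) for g in m.groups())
    return None  # unknown format: leave the style alone

def should_strip_color(css):
    """True for colors close to greyscale and dark -- i.e. the hard-to-spot
    'shade of black that that other program uses' -- False for anything
    saturated or light enough to look like a deliberate choice."""
    rgb = parse_color(css)
    if rgb is None:
        return False
    spread = max(rgb) - min(rgb)       # 0 means pure greyscale
    brightness = sum(rgb) / 3
    return spread < 16 and brightness < 96  # thresholds are illustrative guesses
```

A real paste handler would run this over each inline color style in the pasted fragment and drop the style whenever it returns True, so intentional colored text survives while foreign shades of black do not.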

• Ap­par­ently OpenAI has sold Microsoft some sort of ex­clu­sive li­cence to GPT-3. I as­sume this is bad for the prospects of any­one else do­ing se­ri­ous re­search on it.

• Is there visi­ble re­port­ing on this?

• I re­cently re­al­ized that I’ve been con­fused about an ex­tremely ba­sic con­cept: the differ­ence be­tween an Or­a­cle and an au­tonomous agent.

This feels ob­vi­ous in some sense. But ac­tu­ally, you can ‘get’ to any AI sys­tem via out­put be­hav­ior + robotics. If you can an­swer ar­bi­trary ques­tions, you can also an­swer the ques­tion ‘what’s the next move in this MDP’, or less ab­stractly, ‘what’s the next steer­ing ac­tion of the imag­i­nary wheel’ (for a self-driv­ing car). And the differ­ence can’t be ‘an au­tonomous agent has a robotic com­po­nent’.

The es­sen­tial differ­ence seems to be that the former sys­tem only uses its out­put chan­nels when­ever it is probed, whereas the sec­ond uses them au­tonomously. But I don’t ever hear peo­ple make this dis­tinc­tion. I think part of the rea­son why I hadn’t in­ter­nal­ized this as an axis be­fore is that there is the agent vs. nona­gent thing, but ac­tu­ally, those are or­thog­o­nal to each other. We clearly can have any of the four com­bi­na­tions of {agent, nona­gent} {au­tonomous, non-au­tonomous}.[1]
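The probed-vs-unprompted distinction can be made concrete with a toy sketch (all names here are illustrative, not from any real framework): both systems below share the same policy, and differ only in who initiates use of the output channel.

```python
class Oracle:
    """Uses its output channel only when probed by a question."""

    def __init__(self, policy):
        self.policy = policy

    def answer(self, question):
        # Output produced strictly in response to an external probe.
        return self.policy(question)


class AutonomousAgent:
    """Uses its output channel unprompted, on its own loop."""

    def __init__(self, policy, sensor, actuator):
        self.policy = policy
        self.sensor = sensor
        self.actuator = actuator

    def step(self):
        # Output produced without anyone asking: observe, decide, act.
        self.actuator(self.policy(self.sensor()))

    def run(self, ticks):
        for _ in range(ticks):
            self.step()
```

Plugging an oracle’s answers in as the policy of an `AutonomousAgent` is exactly the “you can get to any AI system via output behavior + robotics” move from above: add a sensor, an actuator, and a loop, and the non-autonomous system becomes autonomous.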

It’s a pretty bad sign that I don’t know without looking at the definition whether ‘tool AI’ refers to the entire bottom half or just the bottom-left quadrant. After looking, it seems to be just the latter.

What led me to this was think­ing about Cor­rigi­bil­ity. I think it is ap­pli­ca­ble to the en­tire top half, all agent-like sys­tems, but it feels like a stronger re­quire­ment for the top right, au­tonomous agents. If you have an or­a­cle, then cor­rigi­bil­ity seems to re­duce to ‘don’t try to in­fluence user’s be­hav­ior through your an­swers’.

When I look at this, I am convinced by the arguments that we probably can’t just build Tool AI, but I super want the most powerful systems of the future to be non-autonomous. That just seems to be way safer without sacrificing a lot of performance. I think because of this, I’ve been thinking of IDA as trying to build non-autonomous systems (basically oracles), even though the sequence pretty clearly seems to have autonomous systems in mind.[2] On the other hand, Debate seems to be primarily aimed at non-autonomous systems, which (if true) is an interesting difference.

So is all of this just news to me, and ac­tu­ally ev­ery­one is aware of this dis­tinc­tion?

1. And if you added a third axis for ‘robotic/​non-robotic’, we would end up with ex­am­ples in all eight ar­eas. ↩︎

2. I award my­self an F- for do­ing this. ↩︎

• Two ex­ist­ing sug­ges­tions for how to avoid ex­is­ten­tial risk nat­u­rally fall out of this fram­ing.

1. Go all the way to the left (even fur­ther than the pic­ture im­plies) by giv­ing the AI no out­put chan­nels what­so­ever. This is Micro­scope AI.

2. Go all the way to the bot­tom and avoid all agent-like sys­tems, but al­low au­tonomous sys­tems like self-driv­ing cars. This is (as I un­der­stand it) Com­pre­hen­sive AI Ser­vices (CAIS).

• I’m go­ing on a 30-hour road­trip this week­end, and I’m look­ing for math/​sci­ence/​hard sci-fi/​world-mod­el­ling Audible recom­men­da­tions. Any­one have any­thing?

• Golden raises $14.5M. I wrote about Golden here as an example of the most common startup failure mode: lacking a single well-formed use case. I’m confused about why someone as savvy as Marc Andreessen is tripling down and joining their board. I think he’s making a mistake.

• If any­one hap­pens to be will­ing to pri­vately dis­cuss some po­ten­tially in­fo­haz­ardous stuff that’s been on my mind (and not in a good way) in­volv­ing acausal trade, I’d ap­pre­ci­ate it—PM me. It’d be nice if I can figure out whether I’m go­ing bat­shit.

• So which simu­lacrum level are ants on when they are end­lessly fol­low­ing each other in a cir­cle?

• Do those of you who live in Amer­ica fear the sce­nar­ios dis­cussed here? (“What If Trump Loses And Won’t Leave?”)

• I do, at least. I don’t think “What if Trump loses and won’t leave” is the best summary of my concern; the best summary is “What if the election is heavily disputed.”

• “What if Trump Loses...” is just the ti­tle of the ar­ti­cle, but the ar­ti­cle also dis­cusses sce­nar­ios where “Bi­den might be the one who dis­putes the re­sult”.

• I do not know whether this has already been mentioned on LessWrong, but 4-6 weeks ago German news websites reported that commercially available mouthwash had been tested in the lab and shown to kill coronavirus, with the (positive) results published in the Journal of Infectious Diseases.

You can click through this article to see the ranked names of the mouthwash brands and their “reduction factor”, though the sample sizes seemed quite small to me. You can also find a list in this overview article. In an article I saw today on this topic, the author warned against using the stuff continuously, because it also kills the desirable part of your oral flora. But it was suggested that it may help once you are infected, and possibly even prophylactically (of course only in the sense of helping when you might already be infected).

• I’m so bored of my job; I need a programming job with actual math/algorithms :/ I’m curious to hear from people here who have programming jobs that are more interesting. In college I competed at a high level in ICPC, but I got it into my head that there are so few programming jobs involving actual advanced algorithms that if your name on Topcoder isn’t red, you might as well forget about it. I ended up taking a boring job at a top tech company that pays well but does very little for society and is not intellectually stimulating at all.

• Have you read https://www.benkuhn.net/hard/ ? Curious what you think. (Disclosure: I started the company that Ben works for, which does not have hard eng problems but does have high potential for social impact.)

• I feel happy pulling up Kattis and doing some algorithm questions, so there is definitely joy to be had chasing technical questions. Ben doesn’t seem to be disputing that, but is offering two other things you can chase:

> Rather than competing for an A+ on a hard problem, I could try to solve an easy problem as quickly as possible.

I don’t know if this differs from person to person, but for me, gamifying a problem can make me care more about something; it can’t make me care about something I don’t care about at all.

> So don’t look for hard problems—important ones are ultimately more fun!

This has been in my head for months, because everyone* gives a variation of this advice, and it feels like it’s missing the hard part. It started when I saw a clip on Reddit of Dr. K from Healthy Gamer saying something along the lines of “If you don’t know what you want to do, get a piece of paper and write down everything wrong with the world. In 5 minutes the paper will be almost full” and... What? No?

I mean, things are problems in that they make people’s lives worse. But I notice very, very little actually changes how I feel. So why would I expect anything I do to change how someone else feels, if nothing they do can change how I feel? There are only two axes that actually change how I feel about life: lonely vs. belonging, and bored vs. engaged. I don’t really have a reason to expect other people are very different, except that people in worse life situations also have an unsafe vs. secure axis. So the problems are “loneliness” and “listlessness”.

Everyone acts like there are important problems everywhere. You see people saying ideas for side projects are a dime a dozen, but here I am: I actually have the funds to quit and make something I thought had value, and nothing I can think of seems to have any value.

*Everyone except one friend on Paxil who assures me the solution to my problem is Paxil, and one friend who is convinced LSD is the solution to all problems. I remain unconvinced.

• Quantitative finance has use for people who know advanced math and algorithms. (Though they are not known for doing great good for society.)

• You can also get around this problem by starting your own ML startup. (I did this.) The startup route takes work and risk tolerance, but provides high positive externalities for society.