# Open & Welcome Thread—June 2020

If it’s worth saying, but not worth its own post, here’s a place to put it. (You can also make a shortform post.)

And, if you are new to LessWrong, here’s the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.

The Open Thread sequence is here.

• How much rioting is actually going on in the US right now?

If you trust leftist (i.e. most US) media, the answer is “almost none, virtually all protesting has been peaceful, nothing to see here, in fact how dare you even ask the question, that sounds suspiciously like something a racist would ask”.

If you take a look on the conservative side of the veil, the answer is “RIOTERS EVERYWHERE! MINNEAPOLIS IS IN FLAMES! MANHATTAN IS LOST! TAKE YOUR KIDS AND RUN!”

So... how much rioting has there actually been? How much damage (very roughly)? How many deaths? Are there estimates of the number of rioters vs peaceful protesters?

(I haven’t put much effort into actually trying to answer these questions, so no-one should feel much obligation to make the effort for me, but if someone already knows some of these answers, that would be cool.)

• I don’t know much about the national picture, but I know that locally there’s been little to no detectable rioting, some amount of looting by opportunistic criminals / idiot teenagers (say, 1 per 700 protesters), and less cop/protester violence than expected, but still some, which could look like rioting if you squint.

• Hi, I joined because I was trying to understand Pascal’s Wager, and someone suggested I look up “Pascal’s mugging”… next thing I know I’m a newly minted HPMOR superfan, and halfway through reading every post Yudkowsky has ever written. This place is an incredible wellspring of knowledge, and I look forward to joining in the discussion!

• Welcome, Yitz!

• LessWrong warned me two months before it arrived here. The suggested preparedness advice was clear and concise, and I felt I had power in my hands: valuable information no one in my tribe had. I alerted my mom, and she listened to me and stayed home and safe while everyone else was out partying (carnival). I had long talks with friends, explaining what I believed was happening and why I believed it. I showed them the numbers, the math, the predictions for the next week; the next week came, and reality presented its metallic taste. Week after week, the light got brighter and brighter until it became really hard to refuse to see it, or to believe in the belief that everything was just fine.

One thing I learned is that it doesn’t matter if you know something really valuable if you can’t convince the people who matter to you. I tried to explain to my father, a physician with 50 years of experience, that he should listen to me. He blamed my low status. Even after weeks, with police in the streets forcing citizens to stay home, he could not believe it. He was in denial, and my incompetence at changing his mind sent him to the hospital; 16 days later, he still isn’t back. Don’t worry, he is getting better, and I am just babbling. A cousin’s father died. My brother tested positive. My stepmother obviously got it too, but tested negative. You know, I really tried; not just tried to try, not just planned to try, I really did it the right way. Successive failures. It wasn’t enough. The only valuable thing I have to share is “learn ways of convincing your most loved ones, urgently, if you don’t have this tool.” I don’t know how to do it yet; I am struggling to find a way, and I would ask you to share when you find one. Things as simple as “there is food over there” or “there is a lion coming at you”, at level 1 talk. Maybe the dark arts could have helped me when level 1 failed, but I’m not sure. But I feel very happy for the many peers I helped along the way, and it’s all due to LessWrong. I am thankful for this community; I changed the behavior of some people I know at the right moment of the outbreak. This simple text forum saves many lives, and I am on the path to contributing on a larger scale too.

I know I am not that new on the forum. I don’t remember exactly when I started here, but I believe it was in the last months of last year. I still think of myself as a noob, but I’m learning. Don’t upvote this comment, but do comment if you want to say something.

• I’m glad you’re trying, and am sorry to hear it is so hard; that sounds really hard. You might try the book “How to Have Impossible Conversations.” I don’t endorse every bit of it, but there’s some good stuff in there IMO, or at least I got mileage from it.

1. People I followed on Twitter for their credible takes on COVID-19 now sound insane. Sigh...

2. I feel like I should do something to prep (e.g., hedge risk to me and my family) in advance of AI risk being politicized, but I’m not sure what. The obvious idea is to stop writing under my real name, but the cost/benefit doesn’t seem worth it.

• Re hedging, a common technique is having multiple fairly different citizenships and foreign-held assets, such that if your country becomes dangerously oppressive, you or your assets wouldn’t be handed back to it. E.g. many Chinese elites pick up a Western citizenship for themselves or their children, and wealthy people fearing change in the US sometimes pick up New Zealand or Singapore homes and citizenship.

There are many countries with schemes to sell citizenship, although often you need to live in them for some years after you make your investment. Then you can emigrate if things start to look too scary, before emigration is restricted.

My sense, however, is that the current risk of needing this is very low in the US, and the most likely reason for someone with the means to buy citizenship to leave would just be increases in wealth/investment taxes through the ordinary political process, with an extremely low chance of a surprise cultural revolution (with large swathes of the population imprisoned, expropriated, or killed for claimed ideological offenses) or a ban on emigration. If you take enough precautions to deal with changes in tax law, I think you’ll be taking more than you need to deal with the much less likely cultural revolution story.

• I was initially pretty excited about the idea of getting another passport, but on second thought I’m not sure it’s worth the substantial costs involved. Today people aren’t losing their passports or having their movements restricted for (them or their family members) having expressed “wrong” ideas, but just(!) losing their jobs, being publicly humiliated, etc. This is more the kind of risk I want to hedge against (with regard to AI), especially for my family. If the political situation deteriorates even further, to where the US government puts official sanctions on people like me, humanity is probably just totally screwed as a whole, and having another passport isn’t going to help me that much.

• I spent some time reading about the situation in Venezuela, and from what I remember, a big reason people are stuck there is simply that the bureaucracy for processing passports is extremely slow/dysfunctional (and lack of a passport presents a barrier to achieving legal immigration status in any other country). So it might be worthwhile to renew your passport more regularly than is strictly necessary, so that you always have, say, at least a 5-year buffer on it, in case we see the same kind of institutional dysfunction. (Much less effort than acquiring a second passport.)

Side note: I once talked to someone who became stuck in a country that he was not a citizen of because he allowed his passport to expire and couldn’t travel back home to get it renewed. (He was from a small country. My guess is that the US offers passport services without needing to travel back home. But I could be wrong.)

• Permanent residency (as opposed to citizenship) is a budget option. For example, for Panama, I believe if you’re a citizen of one of the 50 nations on their “Friendly Nations” list, you can obtain permanent residency by depositing $10K in a Panamanian bank account. If I recall correctly, Paraguay’s permanent residency has similar prerequisites ($5K deposit required) and is the easiest to maintain; you just need to visit the country every 3 years.

• People I followed on Twitter for their credible takes on COVID-19 now sound insane. Sigh...

Are you saying that you initially followed people for their good thoughts on COVID-19, but (a) now they switched to talking about other topics (George Floyd protests?), and their thoughts are much worse on these other topics, (b) their thoughts on COVID-19 became worse over time, (c) they made some COVID-19-related predictions/statements that now look obviously wrong, so that what they previously said sounds insane, or (d) something else?

• You’ll have to infer it from the fact that I didn’t explain more and am not giving a straight answer now. Maybe I’m being overly cautious, but my parents and other relatives lived through (and suffered in) the Cultural Revolution and other “political movements”, and wouldn’t it be silly if I failed to “expect the Spanish Inquisition” despite that?

• It’s helpful (to me, in understanding the types of concerns you’re having) to have mentioned the Cultural Revolution. For this, posting under a pseudonym probably doesn’t help; the groups who focus on control rather than thriving have very good data collection and processing capability, and that’s going to leak to anyone who gets sufficient power with them. True anonymity is gone forever, except by actually being unimportant to the new authorities/mobs.

I wasn’t there, but I had neighbors growing up who’d narrowly escaped and who had friends/relatives killed. Also, a number of friends who relayed family stories from the Nazi Holocaust. The lesson I take is that it takes off quickly, but not quite overnight. There were multi-month windows in both cases where things were locked down, but still porous for those lucky enough to have planned for it, or with assets not yet confiscated, or willing to make sacrifices and take large risks to get out. I suspect those who want to control us have _ALSO_ learned this lesson, and the next time will have a smaller window, perhaps as little as a week. Or perhaps I’m underestimating the slope and it’s already too late.

My advice is basically a barbell strategy for life. Get your exit plan ready to execute on very short notice, and understand that it’ll be costly if you do it. Set objective thresholds for triggers that you just go without further analysis, and also do periodic gut checks to decide to go for triggers you hadn’t considered in advance. HOWEVER, most of your time and energy should go to the more likely situation where you don’t have to ditch. Continue building relationships and personal capital where you think it’s best for your overall goals. Do what you can to keep your local environment sane, so you don’t have to run, and so the world gets back onto a positive trend.

• Get your exit plan ready to execute on very short notice, and understand that it’ll be costly if you do it.

What would be a good exit plan? If you’ve thought about this, can you share your plan and/or discuss (privately) my specific situation?

Do what you can to keep your local environment sane, so you don’t have to run, and so the world gets back onto a positive trend.

How? I’ve tried to do this a bit, but it takes a huge amount of time, effort, and personal risk, and whatever gains I manage to eke out seem to be highly ephemeral at best. It doesn’t seem like a very good use of my time when I can spend it on something like AI safety instead. Have you been doing this yourself, and if so, what has been your experience?

• I do not intend to claim that I’m particularly great at this, and I certainly don’t think I have sufficient special knowledge for 1-1 planning. I’m happy to listen and make lightweight comments if you think it’d be helpful.

What would be a good exit plan?

My plans are half-formed, and include maintaining some foundational capabilities that will help in a large class of disasters that require travel. I have bank accounts in two nations and currencies, and I keep some cash in a number of currencies. Some physical precious metals or hard-to-confiscate digital currency is a good idea too. I have friends and coworkers in a number of countries (including over a border I can cross by land), whom I visit enough that it will seem perfectly normal for me to want to travel there. I’m seriously considering real-estate investments in one or two of those places, to make it even easier to justify travel if it becomes restricted or suspicious.

I still think the likelihood is low that I’ll need to go, but there may come a point where the tactic of maintaining rolling refundable tickets becomes reasonable: buy a flight out at 2 weeks and at 4 weeks, and every 2 weeks cancel the near one and buy a replacement further one.
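The rolling-ticket tactic can be sketched as a simple schedule. This is just a toy simulation of the 2-week/4-week rotation described above; the start date, the `rolling_tickets` function name, and the 12-week horizon are illustrative assumptions, not part of the original comment:

```python
from datetime import date, timedelta

def rolling_tickets(start: date, horizon_weeks: int = 12):
    """Simulate the rolling refundable-ticket scheme: hold tickets
    departing ~2 and ~4 weeks out; every 2 weeks, cancel the nearer
    one and buy a replacement 4 weeks out."""
    held = [start + timedelta(weeks=2), start + timedelta(weeks=4)]
    log = []
    today = start
    while today < start + timedelta(weeks=horizon_weeks):
        today += timedelta(weeks=2)              # review every 2 weeks
        cancelled = min(held)                    # drop the nearer ticket
        held.remove(cancelled)
        held.append(today + timedelta(weeks=4))  # replacement, 4 weeks out
        log.append((today, cancelled, sorted(held)))
        # invariant: you always hold a ticket departing within 2 weeks
        assert min(held) <= today + timedelta(weeks=2)
    return log

for day, cancelled, held in rolling_tickets(date(2020, 6, 1)):
    print(day, "cancelled", cancelled, "holding", held)
```

The point of the invariant is that the scheme guarantees an already-booked flight out at all times, at the cost of perpetually carrying two refundable fares.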

Do what you can to keep your local environment sane.

How?

This is harder to advise. I’m older than most people on LW, and have been building software and saving/investing for decades, so I have resources that can help support what seem to be important causes, and I have a job that has (indirect, but clear) impact on keeping the economy and society running.

I also support and participate in protests and visibility campaigns, to try to make it clear to the less-foresightful members of society that tightening control isn’t going to work. This part is more personal, less clearly impactful toward my goals, and takes a huge amount of time, effort, and personal risk. It’s quite possible that I’m doing it more for the social connections with friends and peers than for purely rational goal-seeking. I wouldn’t fault anyone for preferring to put their effort (which will ALSO take a huge amount of time, effort, and risk (though maybe less short-term physical risk); everything worthwhile does) into other parts of the large and multidimensional risk-space.

• What would be a good exit plan? If you’ve thought about this, can you share your plan and/or discuss (privately) my specific situation?

+1 for this. Would love to talk to other people seriously considering exit. Maybe we could start a Telegram group or something.

• I also have started making plans and would love to hear what others are thinking.

• I am getting worried about something like a cultural revolution here, too. Am considering prepping for leaving the country. This really could go uncomfortably far. The “Spanish Inquisition” with modern surveillance and stylometry is a terrifying thought. Struggle sessions condensing out of Twitter into reality, state-subsidized elite overproduction for the last few decades combined with a significant portion of youthful elites in massive debt, this amplified by demographic changes further restricting the supply of elite positions, Thucydides trap. I feel it may be prudent to get out now. I am not a well-traveled person. Never bothered getting a passport. I will apply for one now, but they are hard to get these days. Shulman’s link for purchasing citizenship is an interesting hedge.

For someone like yourself, doing vital work, such hedges are even more important.

• People I followed on Twitter for their credible takes on COVID-19 now sound insane. Sigh...

Are they still good sources on biology?

• I don’t know the US situation firsthand, but it seems like it could get worse toward the election. Maybe move to Europe?

• What/who does #1 refer to? I’ve changed my mind a lot from reading tweets from people I initially followed for their credible COVID-19 takes, and you saying they sound insane would be a major update for me.

• I saw this “stopped clock” assumption catching a bunch of people with COVID-19, so I wrote a quick post on why it seems unlikely to be a good strategy.

• Please share ideas/articles/resources for immunizing one’s kids against mind viruses.

I think I was lucky myself in that I was partially indoctrinated in Communist China, then moved to the US before middle school, which made it hard for me to strongly believe any particular religion or ideology. Plus, the US schools I went to didn’t seem to emphasize ideological indoctrination as much as schools currently do. Plus, there was no social media pushing students to express the same beliefs as their classmates.

What can I do to help prepare my kids? (If you have specific ideas or advice, please mention what age or grade they are appropriate for.)

• Do you think that having your kids consume rationalist and effective altruist content and/or doing homeschooling/unschooling are insufficient for protecting your kids against mind viruses? If so, I want to understand why you think so (maybe you’re imagining some sort of AI-powered memetic warfare?).

Eliezer has a Facebook post where he talks about how being socialized by old science fiction was helpful for him.

For myself, I think the biggest factors that helped me become/stay sane were spending a lot of time on the internet (which led to me discovering LessWrong, effective altruism, and Cognito Mentoring) and not talking to other kids (I didn’t have any friends from US public school during grades 4 to 11).

• Do you think that having your kids consume rationalist and effective altruist content and/or doing homeschooling/unschooling are insufficient for protecting your kids against mind viruses?

Homeschooling takes up too much of my time, and I don’t think I’m very good at being a teacher (having been forced to try it during the current school closure). Unschooling seems too risky. (Maybe it would produce great results, but my wife would kill me if it didn’t. :) “Consume rationalist and effective altruist content” makes sense, but some more specific advice would be helpful, like what material to introduce, when, and how to encourage their interest if they’re not immediately interested. Have any parents done this and can share their experience?

and not talking to other kids (I didn’t have any friends from US public school during grades 4 to 11)

Yeah, that might have been a contributing factor for myself as well, but my kids seem a lot more social than me.

• “Consume rationalist and effective altruist content” makes sense, but some more specific advice would be helpful, like what material to introduce, when, and how to encourage their interest if they’re not immediately interested. Have any parents done this and can share their experience?

I don’t have kids (yet) and I’m planning to delay any potential detailed research until I do have kids, so I don’t have specific advice. You could talk to James Miller and his son. Bryan Caplan seems to also be doing well in terms of keeping his sons’ views similar to his own; he does homeschool, but maybe you could learn something from looking at what he does anyway. There are a few other rationalist parents, but I haven’t seen any detailed info on what they do in terms of introducing rationality/EA stuff. Duncan Sabien has also thought a lot about teaching children, including designing a rationality camp for kids.

I can also give my own data point: Before discovering LessWrong (age 13-15?), I consumed a bunch of traditional rationality content like Feynman, popular science, online philosophy lectures, and lower-quality online discourse like the xkcd forums. I discovered LessWrong when I was 14-16 (I don’t remember the exact date) and read a bunch of posts in an unstructured way (e.g. I think I read about half of the Sequences, but not in order), and concurrently read things like GEB and started learning how to write mathematical proofs. That was enough to get me to stick around, and led to me discovering EA, getting much deeper into rationality, AI safety, LessWrongian philosophy, etc. I feel like I could have started much earlier though (maybe 9-10?), and that it was only because of my bad environment (in particular, having nobody tell me that LessWrong/Overcoming Bias existed) and poor English ability (I moved to the US when I was 10 and couldn’t read/write English at the level of my peers until age 16 or so) that I had to start when I did.

• If you’re looking for a datapoint, I found and read this ePub of all of Eliezer’s writing when I was around 13 or 14. I would read it late into the night every day (1am, 2am) on the tablet I had at the time, I think an iPhone.

Before that… the first book I snuck out to buy+read was Sam Harris’s “Letter to a Christian Nation” when I was 12-13, and I generally found his talks and books to be really exciting and mind-expanding.

• Opening the Heart of Compassion outlines the Buddhist model of 6 deleterious configurations that people tend to fall into. On top of this, I would add that many of the negative consequences come from our tendency towards monism: to find one thing that works and then try to build an entire worldview out of it.

• Do you mean how to teach them critical thinking skills? Or how to get them to prize the truth over fitting in?

I’m going to assume you’re not a radical leftist. What if your 16 year old kid started sharing every leftist meme because they’ve really thought about it and think it’s true? What if they said, “it doesn’t matter if there’s pressure to hold these political opinions; they’re as true as gravity!”

Would you count that as a success, since they’re bold enough to stand up to an authority figure (you) to honestly express their deeply-considered views? Or a failure? If the latter, why?

• I’m going to assume you’re not a radical leftist. What if your 16 year old kid started sharing every leftist meme because they’ve really thought about it and think it’s true?

I don’t think that most people who really think issues through agree with every leftist meme and think the memes are true. Part of modern leftish ideology is that you should say certain things even when they are not true, because you want to show solidarity. There’s also a belief that certain values shouldn’t be “thought through”. They are sacred and not supposed to be questioned.

• It sounds like you’re setting the bar for epistemic hygiene (i.e. not being infected by a mind virus) at being able to justify your worldview from the ground up. Is that an isolated demand for rigor, or would you view anyone unable to do that as an unreasonable conformist?

• I think you ignore that plenty of people believe in epistemics that value engaging in critical analysis not in the sense of critical thinking, but only in the sense of critical theory.

In leftish activism, people are expected to be able to approve at the same time of the memes “homophobia should always be challenged” and “Islam shouldn’t be challenged”. Explicit discussions about how those values should be traded off against each other are shunned because they violate the underlying sacredness.

Frequently, there’s an idea that beliefs should be based on experience, or on trusting people with experience, and not based on thinking things through. Valuing thinking things through is not universal.

• I’m just not convinced that the radical left has epistemic norms or value priorities that are unusually bad. Imagine you were about to introduce me to five of your friends to talk politics. One identifies as a radical leftist, one a progressive moderate, another a libertarian, the fourth a conservative, and the fifth apolitical. All five of them share a lot of memes on Facebook. They also each have a blog where they write about their political opinions.

I would not be particularly surprised if I had a thoughtful, stimulating conversation with any of them.

My prior is that intellectual profiling based on ideology isn’t a good way to predict how thoughtful somebody is.

So for me, if Wei Dai Jr. turned out to be a 16 year old radical leftist, I wouldn’t think he’s any more conformist than if he’d turned out to be a progressive, libertarian, conservative, or apolitical.

That might just be a crux of disagreement for us, based on differing experiences in interacting with each of these groups.

• A 16yo going into the modern school system and turning into a radical leftist is much more often than not a failure state rather than a success state.

Young leftist conformists outnumber the thought-out and well-reasoned young leftists by at least 10 to 1, so that’s where our prior should be at. Hypothetical Wei then has a few conversations with his hypothetical radical leftist kid, and the kid reasons well for a 16yo. We would expect a well-reasoned leftist to reason well more often than a conformist leftist, so that updates our prior, but I don’t think we’d go as far as saying that it overcomes our original 10 to 1 prior. Well-reasoned people only make arguments sound well-reasoned to others maybe 90% of the time max, and even conformists can make nice-sounding arguments (for a 16yo) fairly often.

Even after the conversations, it’s still more likely that the hypothetical radical leftist kid is a conformist rather than well-reasoned. If hypothetical Wei had some ability to determine to a high degree of certainty whether his kid was a conformist or well-reasoned, then that would be a very different case, and he likely wouldn’t have the concerns about his children being indoctrinated that he expressed in the original post.
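The update being gestured at above can be made explicit with Bayes’ rule in odds form. A minimal sketch: the 10:1 prior and the 90% figure come from the comment itself, while the 50% chance of a conformist sounding well-reasoned is an assumed illustrative number, not anything stated in the thread:

```python
def posterior_odds(prior_odds: float, p_evidence_given_a: float,
                   p_evidence_given_b: float) -> float:
    """Odds-form Bayes: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_evidence_given_a / p_evidence_given_b)

# Prior: conformists outnumber well-reasoned leftists 10:1 (from the comment).
# Evidence: the kid sounds well-reasoned in conversation.
# P(sounds well-reasoned | conformist) = 0.5 is an assumed number;
# P(sounds well-reasoned | well-reasoned) = 0.9 is the comment's "90% max".
odds = posterior_odds(10.0, 0.5, 0.9)
print(f"posterior odds conformist : well-reasoned = {odds:.1f} : 1")
# -> posterior odds conformist : well-reasoned = 5.6 : 1
```

So even when the evidence favors "well-reasoned" (likelihood ratio below 1 from the conformist’s perspective), a single conversation’s worth of evidence cuts the 10:1 prior roughly in half rather than overturning it, which is the comment’s point.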

• You’re neglecting the base rate of 16 year old conformity. I think this is some pretty silly speculation, but let’s run with it. Isn’t the base rate for 16 year old conformity at least 10 to 1? If so, a 16 year old who’s a leftist is no more likely to be a conformist than any other.

In the end, what we’re looking for is a reliable signal that, whatever the 16 year old thinks, it’s due to their independent reasoning.

Widely shared reasonable beliefs won’t cut it, because they wouldn’t have to think them out for themselves. Outrageous contrarian views won’t work, because that’s not reasonable.

You’d have to look for them to hold views that are both reasonable and contrarian. So, a genius. Is that a realistic bar for diagnosing your kid as uninfected by mind viruses?

• Ideological conformity in the school system is not uniform. A person turning left when everybody else is turning right is much less likely to be a conformist than someone else turning right.

ETA: Without metaphor, our prior for conformist vs. well-reasoned is different for young rightists or non-leftists in the school system.

• Are you most concerned that:

1) they will believe false things (which is bad for its own sake)
2) they will do harm to others due to false beliefs
3) harm will come to them because of their false beliefs
4) they will become alienated from you because of your disagreements with each other
5) something else?

It seems like these different possibilities would suggest different mitigations. For example, if the threat model is that they just adopt the dominant ideology around them (which happens to be false on many points), then that results in them having false beliefs (#1), but may not cause any harm to come to them from it (#3) (and may even be to their benefit, in some ways).

Similarly, depending on whether you care more about #1 or #4, you may try harder to correct their false ideas, or to establish a norm for your relationship that it’s fine to disagree with each other. (Though I suspect that, generally speaking, efforts that tend to produce a healthy relationship will also tend to produce true beliefs, in the long run.)

• I should also address this part:

For example, if the threat model is that they just adopt the dominant ideology around them (which happens to be false on many points), then that results in them having false beliefs (#1), but may not cause any harm to come to them from it (#3) (and may even be to their benefit, in some ways).

Many Communist true believers in China met terrible ends as waves of “political movements” swept through the country after the CCP takeover and pitted one group against another, all vying to be the most “revolutionary”. (One of my great-grandparents could have escaped, but stayed in China because he was friends with a number of high-level Communists and believed in their cause. He ended up committing suicide when his friends lost power to other factions and the government turned on him.)

More generally, ideology can change so quickly that it’s very difficult to follow it closely enough to stay safe, and even if you did follow the dominant ideology perfectly, you’re still vulnerable to the next “vanguard” who pushes the ideology in a new direction in order to take power. I think even if “adopt the dominant ideology” is sensible as a defensive strategy for living in some society, you’d still really want to avoid getting indoctrinated into being a true believer, so you can apply rational analysis to the political struggles that will inevitably follow.

• I guess I’m worried about:

1. They will “waste their life”, for both the real opportunity cost and the potential regret they might feel if they realize the error later in life.

2. My own regret in knowing that they’ve been indoctrinated into believing wrong things (or into having unreasonable certainty about potentially wrong things), when I probably could have done something to prevent that.

3. Their views making family life difficult. (E.g., if they were to secretly record family conversations and post them on social media as examples of wrongthink, like some kids have done.)

Can’t really think of any mitigations for these aside from trying not to let them get indoctrinated in the first place...

• My daughter is 2. Everything we do with her is either indoctrination or play; she doesn’t have enough language yet for the learning-begets-learning we naturally assume with older kids and adults.

I was in the military, which is probably the most successful employer of indoctrination in the US. I believe the key to this success rests with the clarity of the indoctrination’s purpose and effectiveness: the purpose is to keep everyone on the same page, because if we aren’t, our people will die (where “our people” means the unit). Indoctrination is the only tool available for this because there isn’t time for sharing all the relevant information or doing analysis.

I plan to capture these benefits for my daughter by being specific, when she inevitably has questions, about the fact that I’m using indoctrination and why indoctrination is a good tool for the situation, rather than about how we think or feel about it.

The bearing I think this has on the question of mind viruses is that she will know what indoctrination looks like when she sees it. Further, she will have expectations of purpose and impact; political indoctrination fails these tests, which I hope will trigger rejection (or at least forestall overcommitment).

• I don’t have chil­dren, and my up­bring­ing wasn’t es­pe­cially good or bad on learn­ing ra­tio­nal­ity.

Still, what I’m notic­ing in your post and the com­ments so far is the idea that ra­tio­nal­ity is some­thing to put into your chil­dren.

I be­lieve that ra­tio­nal­ity mostly needs to be mod­eled. Take your mind and your chil­dren’s con­nec­tion to the uni­verse se­ri­ously. Show them that think­ing and ar­gu­ing are both fun and use­ful.

• How are you handling the problem that rationality often has a negative payoff unless it reaches a critical mass of adopters (e.g., it often leads to poor signaling, or to anti-signaling if one is lucky)?

• Per­sonal up­date: Over the last few months, I’ve be­come much less wor­ried that I have a ten­dency to be too pes­simistic (be­cause I fre­quently seem to be the most pes­simistic per­son in a dis­cus­sion). Things I was wor­ried about more than oth­ers (coro­n­avirus pan­demic, epistemic con­di­tions get­ting sig­nifi­cantly worse) have come true, and when I was wrong in a pes­simistic di­rec­tion, I up­dated quickly af­ter com­ing across a good ar­gu­ment (so I think I was wrong just be­cause I didn’t think of that ar­gu­ment, rather than due to a ten­dency to be pes­simistic).

Feed­back wel­come, in case I’ve up­dated too much about this.

• Hi, I've been a long-time lurker here. Wanted to ask: have you ever done rereads of the Sequences, so that newcomers can engage with the content better and discuss? Just a thought.

• I re­call a [SEQ RERUN] in the past, yes. You are also al­lowed to com­ment on old posts. LessWrong shows “re­cent dis­cus­sion” on its front page, so these do get replies some­times. There was also talk of a book group.

• Scott’s new post on Prob­lems With Pay­walls re­minds me to men­tion the one weird trick I use to get around pay­walls. Many places like NYT will make the pay­wall ap­pear a few sec­onds af­ter land­ing on the page, so I re­li­ably hit cmd-a and cmd-c and then paste the whole post into a text ed­i­tor, and read it there in­stead of on the site. This works for the ma­jor­ity of pay­walled ar­ti­cles I en­counter per­son­ally.

• Or you can use By­pass Pay­walls with Fire­fox or Chrome.

• Ex­per­i­ment­ing with this now!

• If you use Firefox, there is an extension called Temporary Containers. This allows you to load a site in a temporary container tab, which is effectively like opening the site in a fresh install of a browser or on a new device. For sites with rate-limited paywalls like the NYT's, this effectively defeats the paywall, as it never appears to them that you have gone over their rate limit.

The ex­ten­sion can be con­figured so that ev­ery in­stance of a par­tic­u­lar url is au­to­mat­i­cally opened in its own tem­po­rary con­tainer, which defeats these pay­walls at very lit­tle cost to con­ve­nience.

• You can of­ten find ar­ti­cles in the Way­back Ma­chine even if they’re pay­walled.

• Hello; just joined; work­ing through the Library. I ap­pre­ci­ate the com­bi­na­tion of high stan­dards and wel­com­ing tone. I’m a home­school­ing (pre-Covid-19) par­ent in the US South, so among other things I’m look­ing for­ward to find­ing thoughts here on ed­u­ca­tion for chil­dren.

I found Slate Star Codex be­fore LessWrong and hope this doxxing/​out­ing situ­a­tion works out safely.

• There are certainly a lot of people here interested in the same topic! Jeff (https://www.lesswrong.com/users/jkaufman) is probably the most prolific poster on raising children, though his kids are still quite young. Good luck and have fun!

• Wel­come lime­stone! And feel free to leave com­ments here or ping the ad­mins on In­ter­com (the small chat bub­ble in the bot­tom right) if you run into any prob­lems!

• I noticed that all posts for the last day and a half are still personal blogposts, even though many are more "Frontpage" kind of stuff. Is there a bug in the site, is it a new policy for what makes it to Frontpage, or is it just that the moderation team didn't have time to go through the posts?

• Thanks for commenting. So, the latest admin-UI is that we have to decide which core tags to give a post before deciding whether to frontpage it, which is a trivial inconvenience, which leads to delays. At the minute I do care a fair bit about getting the core tags right, so I'm not sure what the best thing to do about this is.

• This seems kind of ter­rible? I ex­pect au­thors and read­ers care more about new posts be­ing pub­lished than about the tags be­ing pris­tine.

• Yeah, to be clear, I agree on both counts, see my re­ply to adam be­low about how long I think the front­page de­ci­sions should take. I do think the tags are im­por­tant so it’s been good to ex­per­i­ment with this, but it isn’t the right call to have de­lays of this length in gen­eral and I/​the team should figure out a way to pre­vent the de­lays pretty soon.

Added: Ac­tu­ally, I think that as read­ers use tags more to filter their front­page posts, it’ll be more im­por­tant to many of them that a post is filtered in/​out of their feed, than whether it was front­paged effi­ciently. But I agree that for au­thor ex­pe­rience, effi­ciency of front­page is a big deal.

• Okay, this makes sense. Per­son­ally, that’s slightly an­noy­ing be­cause this means a post I wrote yes­ter­day will prob­a­bly be lost in the burst of posts pushed to Front­page (as I as­sume it would be go­ing to Front­page), but I also value the tag sys­tem, so I can take a hit or two for that.

That be­ing said, it doesn’t seem sus­tain­able for you: the back­log keeps grow­ing, and I as­sume the de­lays will too, re­sult­ing in posts pushed to Front­page a long time af­ter they were posted.

• I just went through and tagged+front­paged the 10 out­stand­ing posts.

In general I think it's necessary for at least 95% of posts to be frontpaged-or-not within 24 hours of being published, and I think we can get the median to be under 12 hours, and potentially much faster. I don't actually have a number for that; maybe we should just put the average time for the past 14 days on the admin-UI to help us keep track.
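For what it's worth, the dashboard stat suggested above is cheap to compute. Here is a minimal sketch; the `(posted_at, frontpaged_at)` pairs and all the numbers are made up for illustration, and real data would of course come from the site's database:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical (posted_at, frontpaged_at) pairs for illustration only.
now = datetime(2020, 6, 15)
decisions = [
    (now - timedelta(hours=30), now - timedelta(hours=20)),  # 10h delay
    (now - timedelta(hours=48), now - timedelta(hours=44)),  # 4h delay
    (now - timedelta(hours=72), now - timedelta(hours=46)),  # 26h delay
]

# Delay in hours for each post, plus the share decided within 24 hours.
delays_h = [(done - posted).total_seconds() / 3600 for posted, done in decisions]
share_within_24h = sum(d <= 24 for d in delays_h) / len(delays_h)

print(median(delays_h), round(share_within_24h, 2))  # 10.0 0.67
```

Both the median delay and the "decided within 24 hours" fraction mentioned above fall out of the same list of timestamps.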

• Thanks! And the delay you mention fits with my intuition about this.

• I was won­der­ing about this, too. (If the im­plicit Front­pag­ing queue is “stuck”, that gives me an in­cen­tive to de­lay pub­lish­ing my new post, so that it doesn’t have to com­pete with a big burst of back­logged posts be­ing Front­paged at the same time.)

• Hi! I've been reading LessWrong and Slate Star Codex for years, but until today's events I commented pretty much exclusively on SSC. I hope everything resolves for the better, although personally I'm rather pessimistic.

In any case, I've been wondering for a while: are there any online places for casual discussion a la the SSC Open Threads, but more closely related to LessWrong and the Bay Area rationalist community? Threads like this are one such place obviously, but they seem rare and unpopulated. I've tried to find Facebook groups, but with very limited success. Any recommendations?

• I think var­i­ous dis­cus­sions on LessWrong are prob­a­bly your best bet. A lot of LessWrong dis­cus­sion is dis­tributed over a large num­ber of posts and plat­forms, so things end up less cen­tral­ized than for SSC stuff, which has benefits and draw­backs.

For an ex­pe­rience similar to an SSC Open Thread, I think the LW Open Threads are your best bet, though they are definitely a lot less ac­tive than the SSC ones.

• I’m sure this phe­nomenon has a name by now, but I’m strug­gling to find it. What is it called when re­quire­ments are ap­plied to an ex­cess of ap­pli­cants solely for the pur­pose of whit­tling them down to a man­age­able num­ber, but do­ing so ei­ther filters no bet­ter than chance or ac­tively elimi­nates the ideal can­di­date?

For ex­am­ple, a job may re­quire a col­lege de­gree, but its best work­ers would be those with­out one. Or an apart­ment com­plex is rude to ap­pli­cants know­ing there are an ex­cess, scar­ing off good ten­ants in fa­vor of those des­per­ate. Or some­one finds ex­cep­tional luck se­cur­ing on­line dat­ing “matches” and be­gins to fill their pro­file with re­quire­ments that put off worth­while mates.
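To illustrate the first failure mode (a screen that filters no better than chance), here is a hedged sketch: the latent quality score and the "credential" are drawn independently of each other, so screening on the credential shrinks the pool without improving it. All names and numbers here are made up for illustration.

```python
import random

random.seed(0)

def simulate(n=100_000, credential_rate=0.3):
    """Each candidate gets a latent quality score; the credential is
    granted independently of quality, so filtering on it is a
    chance-level screen."""
    candidates = [(random.gauss(0, 1), random.random() < credential_rate)
                  for _ in range(n)]
    passed = [q for q, has_credential in candidates if has_credential]
    everyone = [q for q, _ in candidates]
    return sum(passed) / len(passed), sum(everyone) / len(everyone)

mean_passed, mean_all = simulate()
# The screened pool is smaller but, on average, no better than the
# pool you started with.
print(abs(mean_passed - mean_all) < 0.05)  # True
```

The actively-harmful version of the failure is the same sketch with a negative correlation between quality and credential, in which case the screen is worse than drawing names from a hat.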

• “throw­ing the baby out with the bath­wa­ter”?

• Conservation of thought, perhaps. The root problem is having more options than you can handle, probably amplified by bad premises. On the other hand, if you're swamped, when will you have time to improve your premises?

“Con­ser­va­tion of thought” is from an early is­sue of The New York Re­view of Science Fic­tion.

• I think something like "market inefficiency" might be the word. Disclaimer: I'm not an economist and don't know the precise technical meaning of this term. But roughly speaking, the situations you describe seem to be those where the law of supply and demand is somehow prevented from acting directly on the monetary price, so the non-monetary "price" is increased/decreased instead. In the case of the apartments, they'd probably be happy to increase the price until they've got exactly the right number of applicants, but are kept from doing it by rent control or reputation or something, so they impose moral costs on the applicants. In the case of hiring, they're probably kept from lowering their wages through some combination of: inability to lower the wages of existing employees in similar positions, wages not being exactly public anyway, and maybe some psychological expectation where nobody with the required credentials will agree to work for less than X, no matter how good the conditions are (or alternatively they're genuinely trying to pick the best and failing, in which case it's Goodhart's law). And in the case of the dating market there simply is no universal currency to begin with.

• I think you are referring to Goodhart's law, because all the measures your examples used as proxies for some goal were gamed in a way that made the proxy stop working reliably.

• Hmm, this seems a lit­tle differ­ent from Good­hart’s law (or at least it’s a par­tic­u­lar spe­cial case that de­serves its own name).

This con­cept, as I un­der­stand it, is not about pick­ing the wrong met­ric to op­ti­mize. It’s more like pick­ing the wrong met­ric to satis­fice, or putting the bar for satis­fic­ing in the wrong place.

• Sorry for the out­ages to­day (we had two out­ages, one around 1:00PM PT, one around 3:30PM PT, with in­ter­mit­tent slow re­quests in the in­ter­ven­ing time). As far as I can tell it was caused by a bot that was crawl­ing par­tic­u­larly ex­pen­sive pages (pages with tons of com­ments) at a rel­a­tively high rate. We’ve banned the rele­vant IP range and ev­ery­thing ap­pears back to nor­mal, though I am still watch­ing the logs and server met­rics at­ten­tively.

Again, sorry for any in­con­ve­niences this caused, and please let us know via In­ter­com if you run into any fur­ther prob­lems.

• We might still have some prob­lems with com­ment and PM sub­mis­sions that I am look­ing into. Not sure what’s caus­ing that.

• All re­main­ing prob­lems with doc­u­ment sub­mis­sion should be re­solved. If you had opted into beta fea­tures and had trou­ble sub­mit­ting doc­u­ments in the past few hours, you should be able to do that again, and please let me know via In­ter­com if you can’t.

• Since post­ing this, I’ve re­vised my pa­per, now called “Un­bounded util­ity and ax­io­matic foun­da­tions”, and elimi­nated all the place­hold­ers mark­ing work still to be done. I be­lieve it’s now ready to send off to a jour­nal. If any­one wants to read it, and es­pe­cially if any­one wants to study it and give feed­back, just drop me a mes­sage. As a taster, here’s the in­tro­duc­tion.

Sev­eral ax­io­ma­ti­sa­tions have been given of prefer­ence among ac­tions, which all lead to the con­clu­sion that these prefer­ences are equiv­a­lent to nu­mer­i­cal com­par­i­son of a real-val­ued func­tion of these ac­tions, called a “util­ity func­tion”. Among these are those of Ram­sey [11], von Neu­mann and Mor­gen­stern [17], Nash [8], Marschak [7], and Sav­age [13, 14].
Th­ese ax­io­ma­ti­sa­tions gen­er­ally lead also to the con­clu­sion that util­ities are bounded. (An ex­cep­tion is the Jeffrey-Bolker sys­tem [6, 2], which we shall not con­sider here.) We ar­gue that this con­clu­sion is un­nat­u­ral, and that it arises from a defect shared by all of these ax­iom sys­tems in the way that they han­dle in­finite games. Tak­ing the ax­ioms pro­posed by Sav­age, we pre­sent a sim­ple mod­ifi­ca­tion to the sys­tem that ap­proaches in­finite games in a more prin­ci­pled man­ner. All mod­els of Sav­age’s ax­ioms are mod­els of the re­vised ax­ioms, but the re­vised ax­ioms ad­di­tion­ally have mod­els with un­bounded util­ity. The ar­gu­ments to bounded util­ity based on St. Peters­burg-like gam­bles do not ap­ply to the re­vised sys­tem.
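As a reminder of the kind of gamble at issue, the classic St. Petersburg construction assigns outcome $n$ (for $n = 1, 2, \ldots$) probability $2^{-n}$ and utility $2^{n}$, so its expected utility diverges:

```latex
% St. Petersburg-style gamble: outcome n has probability 2^{-n}
% and utility 2^n, so the expected utility is unbounded:
\[
  \mathbb{E}[U] \;=\; \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n}
              \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty .
\]
% Savage-style systems block this by forcing utility to be bounded;
% per the abstract above, the revised axioms instead handle such
% infinite gambles in a way that leaves unbounded-utility models open.
```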
• Com­ment and post text fields de­fault to “LessWrong Docs [beta]” for me, I as­sume be­cause I have “Opt into ex­per­i­men­tal fea­tures” checked in my user set­tings. I won­der if the “Ac­ti­vate Mark­down Edi­tor” set­ting should take prece­dence?—no one who prefers Mark­down over the Draft.js WYSIWYG ed­i­tor is go­ing to switch be­cause our WYSIWYG ed­i­tor is just that much bet­ter, right? (Why are you guys writ­ing an ed­i­tor, any­way? Like, it looks fun, but I don’t un­der­stand why you’d do it other than, “It looks fun!”)

• Just to clar­ify, I wouldn’t re­ally say that “we are build­ing our own ed­i­tor”. We are just cus­tomiz­ing the CKEdi­tor 5 frame­work. It is definitely a bunch of work, but we aren’t touch­ing any low-level ab­strac­tions (and we’ve spent over­all more time than that try­ing to fix bugs and in­con­sis­ten­cies in the cur­rent ed­i­tor frame­work we are us­ing, so hope­fully it will save us time in the long-run).

• Ah, yeah that makes sense, just an over­sight. I’ll try to fix that next week.

We're using CKEditor 5 as a base to build some new features. There are a number of reasons for this (in the immediate future, it means you can finally have tables), but the most important reason (later on down the line) is that it provides Google Docs-style collaborative editing. In addition to being a generally nice set of features for coauthors, I'm hoping that it dovetails significantly with the LW 2019 review in December, allowing people to suggest changes to nominated posts.

• Can some­one please ex­plain what the fol­low­ing sen­tence from the terms of use means? “In sub­mit­ting User-Gen­er­ated Con­tent to the Web­site, you agree to waive all moral rights in or to your User-Gen­er­ated Con­tent across the world, whether you have or have not as­serted moral rights.”

• (We in­her­ited the terms of use from the old LessWrong so while I tried my best to un­der­stand them, I don’t have as deep of a sense of un­der­stand­ing as I wish I had, and it seemed more im­por­tant to keep the terms of use con­sis­tent than to rewrite ev­ery­thing, to main­tain con­sis­tency be­tween the agree­ments that au­thors made when they con­tributed to the site at differ­ent points in time)

The moral rights in­clude the right of at­tri­bu­tion, the right to have a work pub­lished anony­mously or pseudony­mously, and the right to the in­tegrity of the work.[1] The pre­serv­ing of the in­tegrity of the work al­lows the au­thor to ob­ject to al­ter­a­tion, dis­tor­tion, or mu­tila­tion of the work that is “prej­u­di­cial to the au­thor’s honor or rep­u­ta­tion”.[2] Any­thing else that may de­tract from the artist’s re­la­tion­ship with the work even af­ter it leaves the artist’s pos­ses­sion or own­er­ship may bring these moral rights into play. Mo­ral rights are dis­tinct from any eco­nomic rights tied to copy­rights. Even if an artist has as­signed his or her copy­right rights to a work to a third party, he or she still main­tains the moral rights to the work.[3]

What ex­actly these rights are seems to differ a bunch from coun­try to coun­try. In the U.S. the pro­tec­tion of these moral rights is pretty limited. From the same ar­ti­cle:

Mo­ral rights[17] have had a less ro­bust tra­di­tion in the United States. Copy­right law in the United States em­pha­sizes pro­tec­tion of fi­nan­cial re­ward over pro­tec­tion of cre­ative at­tri­bu­tion.[5]:xiii The ex­clu­sive rights tra­di­tion in the United States is in­con­sis­tent with the no­tion of moral rights as it was con­sti­tuted in the Civil Code tra­di­tion stem­ming from post-Revolu­tion­ary France. When the United States ac­ceded to the Berne Con­ven­tion, it stipu­lated that the Con­ven­tion’s “moral rights” pro­vi­sions were ad­dressed suffi­ciently by other statutes, such as laws cov­er­ing slan­der and li­bel.[5]

Con­crete in­stances where I can imag­ine this waiv­ing be­com­ing rele­vant, and where I think this makes sense (though this is just me guess­ing, I have not dis­cussed this in de­tail with a lawyer):

• An au­thor leaves a com­ment on a post that starts with a steel­man of an op­pos­ing po­si­tion. We dis­play a trun­cated ver­sion of the com­ment by de­fault, which now only shows them ar­gu­ing for a po­si­tion they find ab­hor­rent. This could po­ten­tially vi­o­late their moral rights by al­ter­ing their con­tri­bu­tion in a way that vi­o­lates their honor or rep­u­ta­tion.

• An au­thor leaves a com­ment and an­other user quotes a sub­sec­tion of that com­ment, bold­ing, or ital­i­ciz­ing var­i­ous sec­tions that they dis­agree with, and in­sert­ing sen­tences us­ing no­ta­tion.

• Some good news for the claim that public awareness of x-risk in general should go up after coronavirus: The Economist's cover story: https://www.economist.com/node/21788546?frsc=dg|e, https://www.economist.com/node/21788589?frsc=dg|e

• I’ve been search­ing for a LW post for half an hour. I think it was writ­ten within the last few months. It’s about how to un­der­stand be­liefs that stronger peo­ple have, with­out sim­ply defer­ring to them. It was on the front page while I was read­ing the com­ments to this post of mine, which is how I found it. Any­one know which post I’m try­ing to find?

• Hi, just as a note: https://www.lesswrong.com/allPosts?filter=curated&view=new (which you get from googling for curated posts) looks really weird because the shortform posts are not filtered out.

• This is a ques­tion about pri­ori­ti­za­tion and to do lists. I find that my af­fairs can be sorted into:

• ur­gent and im­por­tant (do this or else you will lose your job; the doc­tor says to do X or you will die a hor­rible death in Y days)

• This stuff re­ally needs to get done soon but the world won’t end (pay­ing bills/​deal­ing with bugs in one’s life/​fix­ing chronic health is­sues)

• Every­thing else

Due to some of the things in the 2nd cat­e­gory, I have very lit­tle time to spend on the lat­ter 2 cat­e­gories. There­fore, I find that when I have a mo­ment to sit down and try to plan the re­main­ing min­utes/​hours of the day, I keep think­ing of stuff I’ve for­got­ten. For in­stance, at T-0, I will say “I should do A, B, and C”. At T+5, I will re­mem­ber D and say “I should do D, A, and B; there is no time for C to­day”. And on it goes un­til spend­ing time on A or B seems profoundly fool­ish and doomed.

In HPMOR, the prob­lem is also pre­sented at the end of the book.

Has any­one writ­ten about deal­ing with this prob­lem?

• "In 2017, a federal court, the U.S. Southern District Court of New York, sided with Elsevier and ruled Sci-Hub should stop operating and pay $15 million in damages. In a similar lawsuit, the American Chemical Society won a case against Elbakyan and the right to demand another $4.8 million in damages.

In addition, both courts effectively prohibited any U.S. company from facilitating Sci-Hub's work. Elbakyan had to migrate the website from its early .org domain, and U.S.-based online payment services are no longer an option for her. She can no longer use Cloudflare, a service that protects websites from denial-of-service attacks, she said."

• This clip (from The Office) re­minds me of when peo­ple sug­gest var­i­ous patches to AI af­ter it vi­o­lates some safety con­cern /​ fails an ex­per­i­ment.

• Am I the only one for whom all com­ments in the Align­ment Fo­rum have 0 votes?

• Nope, looks like a bug, will look into it.

• If you like Yud­kowskian fic­tion, Wer­tifloke = Eliezer Yudkowsky

The Waves Arisen: https://wertifloke.wordpress.com/

• Just got a Roam ac­count; is there any good re­source on how to use it? I looked into the help page, but most links don’t lead any­where. Thanks.

• I'm planning to try this: https://learn.nateliason.com/. I think that the Roam founder also recommends it.

• I re­cently ap­plied for a Roam ac­count. Can I ask when it was you ap­plied and how long be­fore they got back to you?

• I would say I ap­plied a cou­ple of weeks ago (maybe 3 weeks), and re­ceived an email yes­ter­day tel­ling me that ac­counts were open­ing again.

• Thanks.

• Is there a name for in­tu­ition/​fal­lacy that an ad­vanced AI or alien race must also be morally su­pe­rior?

• I think you can refer the person to the orthogonality thesis.

• It might gen­er­ally be Mo­ral Real­ism (anti-moral-rel­a­tivism). The no­tion that moral­ity is some uni­ver­sal ob­jec­tive truth that we grad­u­ally un­cover more of as we grow wiser. That’s how those peo­ple usu­ally con­ceive it.

I some­times call it anti-or­thog­o­nal­ism.

• I want to ex­plain my down­vot­ing this post. I think you are at­tack­ing a mas­sive straw­man by equat­ing moral re­al­ism with [dis­agree­ing with the or­thog­o­nal­ity the­sis].

Mo­ral re­al­ism says that moral ques­tions have ob­jec­tive an­swers. I’m al­most cer­tain this is true. The rele­vant form of the or­thog­o­nal­ity the­sis says that there ex­ist minds such that in­tel­li­gence is in­de­pen­dent of goals. I’m al­most cer­tain this is true.

It does not say that in­tel­li­gence is or­thog­o­nal to goals for all agents. Rele­vant quote from EY:

I mean, I would po­ten­tially ob­ject a lit­tle bit to the way that Nick Bostrom took the word “or­thog­o­nal­ity” for that the­sis. I think, for ex­am­ple, that if you have hu­mans and you make the hu­man smarter, this is not or­thog­o­nal to the hu­mans’ val­ues. It is cer­tainly pos­si­ble to have agents such that as they get smarter, what they would re­port as their util­ity func­tions will change. A pa­per­clip max­i­mizer is not one of those agents, but hu­mans are.

And the wiki page Filipe March­esini linked to also gets this right:

The orthogonality thesis states that an artificial intelligence can have any combination of intelligence level and goal. [emphasis added]

• Good com­ment, but… Have you read Three Wor­lds Col­lide? If you were in a situ­a­tion similar to what it de­scribes, would you still be call­ing your po­si­tion moral re­al­ism?

I am not out to attack the position that humans fundamentally, generally align with humans. I don't yet agree with it; its claim, "every moral question has a single true answer", might turn out to be a confused paraphrasing of "every war has a victor", but I'm open to the possibility that it's meaningfully true as well.

• Good com­ment, but… Have you read Three Wor­lds Col­lide? If you were in a situ­a­tion similar to what it de­scribes, would you still be call­ing your po­si­tion moral re­al­ism?

Yes and yes. I got very emotional when reading that. I thought rejecting the happiness surgery, or whatever it was that the advanced alien species prescribed, was blatantly insane.

• Seems like an ap­peal to ?false? au­thor­ity. May not be a fal­lacy be­cause there’s a demon­stra­ble trend be­tween tech­nolog­i­cal su­pe­ri­or­ity and moral su­pe­ri­or­ity at least on Earth. As­sum­ing that trend ex­tends to other civ­i­liza­tions off Earth? I’m sure there’s some­thing fal­la­cious about that, maybe too geo­cen­tric.

• Mod note: Copied over from one of Zvi’s Covid posts to keep the rele­vant thread on-topic:

[...]

Murdered

I would sim­ply like to point out here 3 things.

1. The definition of homicide from Wikipedia: "A homicide requires only a volitional act by another person that results in death, and thus a homicide may result from accidental, reckless, or negligent acts even if there is no intent to cause harm." Such a finding in an autopsy report does not imply a crime, let alone murder.

2. The au­topsy re­port or­dered by his fam­ily showed quan­tities of nu­mer­ous drugs in­clud­ing very sig­nifi­cant, po­ten­tially lethal, quan­tities of Fen­tanyl, a drug of­ten as­so­ci­ated with res­pi­ra­tory failure, in Ge­orge Floyd’s blood. Floyd was also pos­i­tive for Coron­avirus, which is known to im­pact heart and lung func­tion, and had heart dis­ease and var­i­ous other rele­vant med­i­cal con­di­tions. Con­sider the pos­si­bil­ity that the causal chain is less clear than might ap­pear su­perfi­cially.

3. I see at this point no court find­ing of mur­der.

OP asked for only com­ments on the CV pan­demic but I think that his in­flam­ma­tory com­ment re­quires some clar­ifi­ca­tion.

• Western civ­i­liza­tion may be en­ter­ing a new dark age.