2016 LessWrong Diaspora Survey Analysis: Part Three (Mental Health, Basilisk, Blogs and Media)

Mental Health

We decided to move the Mental Health section closer to the beginning of the survey this year so that the data could inform accessibility decisions.

LessWrong Mental Health As Compared To Base Rates In The General Population

Condition                       | Base Rate | LessWrong Rate | LessWrong Self-dx Rate | Combined LW Rate | Base/LW Rate Spread | Relative Risk
Depression                      | 17%       | 25.37%         | 27.04%                 | 52.41%           | +8.37               | 1.492
Obsessive Compulsive Disorder   | 2.3%      | 2.7%           | 5.6%                   | 8.3%             | +0.4                | 1.173
Autism Spectrum Disorder        | 1.47%     | 8.2%           | 12.9%                  | 21.1%            | +6.73               | 5.578
Attention Deficit Disorder      | 5%        | 13.6%          | 10.4%                  | 24%              | +8.6                | 2.719
Bipolar Disorder                | 3%        | 2.2%           | 2.8%                   | 5%               | −0.8                | 0.733
Anxiety Disorder(s)             | 29%       | 13.7%          | 17.4%                  | 31.1%            | −15.3               | 0.472
Borderline Personality Disorder | 5.9%      | 0.6%           | 1.2%                   | 1.8%             | −5.3                | 0.101
Schizophrenia                   | 1.1%      | 0.8%           | 0.4%                   | 1.2%             | −0.3                | 0.727
Substance Use Disorder          | 10.6%     | 1.3%           | 3.6%                   | 4.9%             | −9.3                | 0.122

Base rates are taken from Wikipedia; US rates were favored over global rates where immediately available.
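
For concreteness, here's how the derived columns fall out of the three reported rates. This is a minimal Python sketch, not the survey analysis code itself; the figures are copied from the table above (two conditions shown for brevity):

    # Minimal sketch: reproducing the derived columns from the table above.
    # Rates are in percent.
    conditions = {
        # name: (base rate, LW clinical dx rate, LW self-dx rate)
        "Depression": (17.0, 25.37, 27.04),
        "Attention Deficit Disorder": (5.0, 13.6, 10.4),
    }
    for name, (base, dx, self_dx) in conditions.items():
        combined = dx + self_dx   # Combined LW Rate
        spread = dx - base        # Base/LW Rate Spread, in percentage points
        rr = dx / base            # Relative Risk of a clinical diagnosis
        print(f"{name}: combined={combined:.2f}%, spread={spread:+.2f}, RR={rr:.3f}")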

Accessibility Suggestions

So of the conditions we asked about, LessWrongers are at significant extra risk for three of them: autism, ADHD, and depression.

LessWrong probably doesn't need to concern itself with being more accessible to those with autism as it likely already is. Depression is a complicated disorder with no clear interventions that can be easily implemented as site or community policy. It might be helpful to encourage looking more at positive trends in addition to negative ones, but the community already seems to do a fairly good job of this. (We could definitely use some more of it though.)

Attention Deficit Disorder—Public Service Announcement

That leaves ADHD, which we might be able to do something about, starting with this:

A lot of LessWrong stuff ends up falling into the same genre as productivity advice or 'self help'. If you have trouble getting yourself to work, and find yourself reading these things but being completely unable to implement them, it's entirely possible that you have a mental health condition which impacts your executive function.

The best overview I've been able to find on ADD is this talk from Russell Barkley.

30 Essential Ideas For Parents

Ironically enough, this is a long talk, over four hours in total. Barkley is an entertaining speaker and the talk is absolutely fascinating. If you're even mildly interested in the subject I wholeheartedly recommend it. Many people who have ADHD just assume that they're lazy, or not trying hard enough, or just haven't found the 'magic bullet' yet. It never even occurs to them that they might have it, because they assume that adult ADHD looks like childhood ADHD, or that ADHD is a thing psychiatrists made up so they can give children powerful stimulants.

ADD is real, and if you're in the demographic that takes this survey there's a decent enough chance you have it.

Attention Deficit Disorder—Accessibility

So with that in mind, is there anything else we can do?

Yes, write better.

Scott Alexander has written a blog post with writing advice for non-fiction, and the interesting thing about it is just how much of the advice is what I would tell you to do if your audience has ADD.

  • Reward the reader quickly and often. If your prose isn't rewarding to read it won't be read.

  • Make sure the overall article has good sectioning and indexing; people might be looking for only one particular thing, and they won't want to wade through everything else to get it. Sectioning also gives the impression of progress and reduces eye strain.

  • Use good data visualization to compress information and take away mental effort where possible. Take for example the condition table above. It saves space and provides additional context. Instead of a long vertical wall of text with sections for each condition, it removes:

    • The extraneous information of how many people said they did not have a condition.

    • The space that would be used by creating a section for each condition. In fact the specific improvement of the table is that it takes extra advantage of space in the horizontal plane as well as the vertical plane.

    And instead of just presenting the raw data, it also adds:

    • The normal rate of incidence for each condition, so that the reader understands the extent to which rates are abnormal or unexpected.

    • Easy comparison between the clinically diagnosed, self diagnosed, and combined rates of each condition in the LW demographic. This preserves the value of the original raw data presentation while also easing the mental arithmetic of how many people claim to have a condition.

    • The percentage spread between the clinically diagnosed rate and the base rate, which saves the effort of figuring out the difference between the two values.

    • The relative risk between the clinically diagnosed rate and the base rate, which saves the effort of figuring out how much more or less likely a LessWronger is to have a given condition.

    Add all that together and you've created a compelling presentation that significantly improves on the 'naive' raw data presentation.

  • Use visuals in general; they help draw and maintain interest.

None of these are solely for the benefit of people with ADD. ADD is an exaggerated profile of normal human behavior. Following this kind of advice makes your article more accessible to everybody, which should be more than enough incentive if you intend to have an audience.1

Roko’s Basilisk

This year we finally added a Basilisk question! In fact, it kind of turned into a whole Basilisk section. A fairly common question about this year's survey is why the Basilisk section is so large. The basic reason is that asking only one or two questions about it would leave the results open to rampant speculation in one direction or another. By making the section comprehensive and covering every base, we've gotten about as complete a picture of the Basilisk phenomenon as we'd want.

Basilisk Knowledge
Do you know what Roko's Basilisk thought experiment is?

Yes: 1521 73.2%
No but I've heard of it: 158 7.6%
No: 398 19.2%

Basilisk Etiology
Where did you read Roko's argument for the Basilisk?

Roko's post on LessWrong: 323 20.2%
Reddit: 171 10.7%
XKCD: 61 3.8%
LessWrong Wiki: 234 14.6%
A news article: 71 4.4%
Word of mouth: 222 13.9%
RationalWiki: 314 19.6%
Other: 194 12.1%

Basilisk Correctness
Do you think Roko's argument for the Basilisk is correct?

Yes: 75 5.1%
Yes but I don't think its logical conclusions apply for other reasons: 339 23.1%
No: 1055 71.8%

Basilisks And Lizardmen

One of the biggest mistakes I made with this year's survey was not including "Do you believe Barack Obama is a hippopotamus?" as a control question in this section.2 Five percent is just outside of the infamous lizardman constant. This was the biggest survey surprise for me. I thought there was no way that 'yes' could go above a couple of percentage points. As far as I can tell this result is not caused by brigading, but I've by no means investigated the matter so thoroughly that I would rule it out.

Higher?

Of course, we also shouldn't forget to investigate the hypothesis that the number might be higher than 5%. After all, somebody who thinks the Basilisk is correct could skip the questions entirely so they don't face potential stigma. So how many people skipped the questions but filled out the rest of the survey?

Eight people refused to answer whether they'd heard of Roko's Basilisk but went on to answer the depression question immediately after the Basilisk section. This gives us a decent proxy for how many people skipped the section and took the rest of the survey. So if we're pessimistic the number is a little higher, but it pays to keep in mind that there are other reasons to want to skip this section. (It is also possible that people took the survey up until they got to the Basilisk section and then quit so they didn't have to answer it, but this seems unlikely.)
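
The check itself is simple to reproduce. Here's a hedged sketch assuming a CSV export of the survey data; the column names (BasiliskKnowledge, Depression) are hypothetical stand-ins, not the real export's names:

    # Sketch with hypothetical column names: count respondents who skipped
    # the Basilisk knowledge question but answered the depression question
    # immediately after the Basilisk section.
    import pandas as pd

    df = pd.read_csv("survey.csv")  # assumed survey export
    skipped = df["BasiliskKnowledge"].isna() & df["Depression"].notna()
    print(int(skipped.sum()))  # 8 in this year's data, per the text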

Of course this assumes people are being strictly truthful with their survey answers. It's also plausible that people who think the Basilisk is correct said they'd never heard of it and then went on with the rest of the survey. So the number could in theory be quite large. My hunch is that it's not. I personally know quite a few LessWrongers and I'm fairly sure none of them would tell me that the Basilisk is 'correct'. (In fact I'm fairly sure they'd all be offended at me even asking the question.) Since 5% is one in twenty I'd think I'd know at least one or two people who thought the Basilisk was correct by now.

Lower?

One partial explanation for the surprisingly high rate here is that ten percent of the people who said yes, by their own admission, didn't know what they were saying yes to: eight people said they've heard of the Basilisk but don't know what it is, and that it's correct. The lizardman constant also plausibly explains a significant portion of the yes responses, but that explanation relies on you already having a prior belief that the rate should be low.


Basilisk-Like Danger
Do you think Basilisk-like thought experiments are dangerous?

Yes, I think they're dangerous for decision theory reasons: 63 4.2%
Yes, I think they're dangerous for social reasons (e.g. a cult might use them): 194 12.8%
Yes, I think they're dangerous for decision theory and social reasons: 136 9%
Yes, I think they're socially dangerous because they make everybody involved look foolish: 253 16.7%
Yes, I think they're dangerous for other reasons: 54 3.6%
No: 809 53.4%

Most people don't think Basilisk-like thought experiments are dangerous at all. Of those who think they are, most think they're socially dangerous as opposed to a raw decision theory threat. The 4.2% figure for a pure decision theory threat is interesting because it lines up with the 5% 'yes' rate in the previous question on Basilisk Correctness.

P(Decision Theory Danger | Basilisk Belief) = 26.6%
P(Decision Theory And Social Danger | Basilisk Belief) = 21.3%

So of the people who say the Basilisk is correct, only about half believe it is a decision theory based danger at all. (In theory this could be because they believe the Basilisk is a good thing and therefore not dangerous, but I refuse to lose that much faith in humanity.3)
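
These conditional probabilities are just shares within the group that answered 'Yes' to the correctness question. A sketch, again with hypothetical column and answer names rather than the real export's:

    # Sketch: the conditional probabilities above as within-group shares.
    import pandas as pd

    df = pd.read_csv("survey.csv")  # assumed survey export
    believers = df[df["BasiliskCorrect"] == "Yes"]  # the 75 'Yes' respondents
    p_dt = (believers["BasiliskDanger"] == "Decision theory reasons").mean()
    p_both = (believers["BasiliskDanger"] == "Decision theory and social reasons").mean()
    print(p_dt, p_both)  # 0.266 and 0.213, per the figures above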

Basilisk Anxiety
Have you ever felt any sort of anxiety about the Basilisk?

Yes: 142 8.8%
Yes but only because I worry about everything: 189 11.8%
No: 1275 79.4%

20.6% of respondents have felt some kind of Basilisk anxiety. It should be noted that the exact wording of the question permits any anxiety, even for a second. And as we'll see in the next question that nuance is very important.

Degree Of Basilisk Worry
What is the longest span of time you've spent worrying about the Basilisk?

I haven't: 714 47%
A few seconds: 237 15.6%
A minute: 298 19.6%
An hour: 176 11.6%
A day: 40 2.6%
Two days: 16 1.05%
Three days: 12 0.79%
A week: 12 0.79%
A month: 5 0.32%
One to three months: 2 0.13%
Three to six months: 0 0.0%
Six to nine months: 0 0.0%
Nine months to a year: 1 0.06%
Over a year: 1 0.06%
Years: 4 0.26%

These numbers provide some pretty sobering context for the previous ones. 93.8% of respondents to this question either never worried about the Basilisk or didn't worry about it for more than an hour. The next 3.65% didn't worry about it for more than a day or two. The next 1.9% didn't worry about it for more than a month, and the last .7% or so have worried about it for longer.
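
The cumulative figures fall straight out of the response counts above; here is the arithmetic for the 93.8% figure as a quick check (note the denominator is everyone who answered, including those who never worried):

    # Worked check of the 93.8% figure from the response counts above.
    counts = {
        "I haven't": 714, "A few seconds": 237, "A minute": 298, "An hour": 176,
        "A day": 40, "Two days": 16, "Three days": 12, "A week": 12, "A month": 5,
        "One to three months": 2, "Three to six months": 0, "Six to nine months": 0,
        "Nine months to a year": 1, "Over a year": 1, "Years": 4,
    }
    total = sum(counts.values())  # 1518 respondents
    hour_or_less = sum(counts[k] for k in ("I haven't", "A few seconds", "A minute", "An hour"))
    print(hour_or_less / total)   # 1425 / 1518 = 0.9387...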

Current Basilisk Worry
Are you currently worrying about the Basilisk?

Yes: 29 1.8%
Yes but only because I worry about everything: 60 3.7%
No: 1522 94.5%

Also encouraging. We should expect a small number of people to be worried at this question just because the section is basically the words "Basilisk" and "worry" repeated over and over, which is probably a bit scary to some people. But these numbers are much lower than the "Have you ever worried" ones, and they back up the previous inference that Basilisk anxiety is mostly a transitory phenomenon.

One article on the Basilisk asked whether it was just a "referendum on autism". It's a good question and now I have an answer for you, as per the table below:

Mental Health Conditions Versus Basilisk Worry

Condition                               | Worried | Worried But They Worry About Everything | Combined Worry
Baseline (in the respondent population) | 8.8%    | 11.8%                                   | 20.6%
ASD                                     | 7.3%    | 17.3%                                   | 24.7%
OCD                                     | 10.0%   | 32.5%                                   | 42.5%
Anxiety Disorder                        | 6.9%    | 20.3%                                   | 27.3%
Schizophrenia                           | 0.0%    | 16.7%                                   | 16.7%

The short answer: autism raises your chances of Basilisk anxiety, but anxiety disorders, and OCD especially, raise them much more. Interestingly enough, schizophrenia seems to bring the chances down. This might just be an effect of small sample size, but my expectation was the opposite. (People who are really obsessed with Roko's Basilisk seem to present with schizophrenic symptoms, at any rate.)

Before we move on, there's one last elephant in the room to contend with. The philosophical theory underlying the Basilisk is the CEV conception of friendly AI primarily espoused by Eliezer Yudkowsky, which has led many critics to speculate on all kinds of relationships between Eliezer Yudkowsky and the Basilisk. That speculation naturally extends to Yudkowsky's Machine Intelligence Research Institute, a project to develop 'Friendly Artificial Intelligence' which does not implement a naive goal function that eats everything else humans actually care about once it's given sufficient optimization power.

The general thrust of these accusations is that MIRI, intentionally or not, profits from belief in the Basilisk. I think MIRI gets picked on enough, so I'm not thrilled about adding another log to the hefty pile of criticism they deal with. However, this is a serious accusation, and one plausible enough that looking into it is in the public interest.

Percentage Of People Who Donate To MIRI Versus Basilisk Belief

Belief                            | Percentage
Believe It's Incorrect            | 5.2%
Believe It's Structurally Correct | 5.6%
Believe It's Correct              | 12.0%

Basilisk belief does appear to make you about twice as likely to donate to MIRI. It's important to note, from the perspective of the earlier investigation, that thinking the argument is "structurally correct" makes you about as likely to donate as thinking it's incorrect, implying that both of these options mean about the same thing.
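
Computationally the table above is a one-liner: the share of each belief group reporting a nonzero MIRI donation. A sketch with hypothetical column names, not the real export's:

    # Sketch: donation rate (share with a nonzero MIRI donation) per belief group.
    import pandas as pd

    df = pd.read_csv("survey.csv")  # assumed survey export
    donated = df["MIRIDonation"].fillna(0) > 0
    print(donated.groupby(df["BasiliskCorrect"]).mean())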

Sum Money Donated To MIRI Versus Basilisk Belief

Belief                            | Mean     | Median | Mode  | Stdev    | Total Donated
Believe It's Incorrect            | 1365.590 | 100.0  | 100.0 | 4825.293 | 75107.5
Believe It's Structurally Correct | 2644.736 | 110.0  | 20.0  | 9147.299 | 50250.0
Believe It's Correct              | 740.555  | 300.0  | 300.0 | 1152.541 | 6665.0

Take these numbers with a grain of salt; it only takes one troll plausibly lying about their donations to ruin it for everybody else.

Interestingly enough, if you sum all three Total Donated figures, five percent of that sum ($6601, to be exact) is about what was donated by the Basilisk group. So even though the modal and median donations of Basilisk believers are higher, as a group they donate about as much as would be naively expected by assuming donations among groups are equal.4
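
The arithmetic behind that claim, for the record:

    # Worked check: believers' share of the total MIRI donation pot.
    totals = {"incorrect": 75107.5, "structurally correct": 50250.0, "correct": 6665.0}
    pot = sum(totals.values())      # 132022.5
    print(0.05 * pot)               # 6601.125, about what the believer group donated
    print(totals["correct"] / pot)  # ~0.0505, i.e. about five percent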

Percentage Of People Who Donate To MIRI Versus Basilisk Worry

Anxiety                                 | Percentage
Never Worried                           | 4.3%
Worried But They Worry About Everything | 11.1%
Worried                                 | 11.3%

In contrast to the correctness question, merely having worried about the Basilisk at any point in time doubles your chances of donating to MIRI. My suspicion is that these people are not, as a general rule, donating because of the Basilisk per se. If you're the sort of person who is even capable of worrying about the Basilisk in principle, you're probably the kind of person who is likely to worry about AI risk in general and donate to MIRI on that basis. This hypothesis is probably unfalsifiable with the survey information I have, because Basilisk-risk is a subset of AI risk. This means that anytime somebody indicates on the survey that they're worried about AI risk, this could be because they're worried about the Basilisk or because they're worried about more general AI risk.

Sum Money Donated To MIRI Versus Basilisk Worry

Anxiety                                 | Mean     | Median | Mode  | Stdev     | Total Donated
Never Worried                           | 1033.936 | 100.0  | 100.0 | 3493.373  | 56866.5
Worried But They Worry About Everything | 227.047  | 75.0   | 300.0 | 438.861   | 4768.0
Worried                                 | 4539.25  | 90.0   | 10.0  | 11442.675 | 72628.0
Combined Worry                          |          |        |       |           | 77396.0

Take these numbers with a grain of salt; it only takes one troll plausibly lying about their donations to ruin it for everybody else.

This particular analysis is probably the strongest evidence in the set for the hypothesis that MIRI profits (though not necessarily through any involvement on their part) from the Basilisk. People who worried from an unendorsed perspective donate less on average than everybody else. The modal donation among people who've worried about the Basilisk is ten dollars, which seems like a surefire way to get tortured if we're going with the hypothesis that these are people who believe the Basilisk is a real thing and are concerned about it. So this implies that they don't, which supports my earlier hypothesis that people who are capable of feeling anxiety about the Basilisk are the core demographic to donate to MIRI anyway.

Of course, donors don't need to believe in the Basilisk for MIRI to profit from it. If exposing people to the concept of the Basilisk makes them twice as likely to donate but they don't end up actually believing the argument, that would arguably be the ideal outcome for MIRI from an Evil Plot perspective. (Since, after all, pursuing a strategy which involves Basilisk belief would actually incentivize torture from the perspective of the acausal game theories MIRI bases its FAI on, which would be bad.)

But frankly this is veering into very speculative territory. I don't think there's an evil plot, nor am I convinced that MIRI is profiting from Basilisk belief in a way that outweighs the resulting lost donations and damage to their cause.5 If anybody would like to assert otherwise I invite them to 'put up or shut up' with hard evidence. The world has enough criticism based on idle speculation and you're peeing in the pool.

Blogs and Media

Since this was the LessWrong diaspora survey, I felt it would be in order to reach out a bit and ask not just where the community is at but what it's reading. I went around to various people I knew and asked them about blogs for this section. However, the picks were largely based on my mental 'map' of the blogs that are commonly read/linked in the community, with a handful of suggestions thrown in. The same method was used for stories.

Blogs Read

LessWrong
Regular Reader: 239 13.4%
Sometimes: 642 36.1%
Rarely: 537 30.2%
Almost Never: 272 15.3%
Never: 70 3.9%
Never Heard Of It: 14 0.7%

SlateStarCodex (Scott Alexander)
Regular Reader: 1137 63.7%
Sometimes: 264 14.7%
Rarely: 90 5%
Almost Never: 61 3.4%
Never: 51 2.8%
Never Heard Of It: 181 10.1%

[These two results together pretty much confirm the results I talked about in part two of the survey analysis. A supermajority of respondents are 'regular readers' of SlateStarCodex. By contrast, LessWrong itself doesn't even have a quarter of SlateStarCodex's readership.]

Overcoming Bias (Robin Hanson)
Regular Reader: 206 11.751%
Sometimes: 365 20.821%
Rarely: 391 22.305%
Almost Never: 385 21.962%
Never: 239 13.634%
Never Heard Of It: 167 9.527%

Minding Our Way (Nate Soares)
Regular Reader: 151 8.718%
Sometimes: 134 7.737%
Rarely: 139 8.025%
Almost Never: 175 10.104%
Never: 214 12.356%
Never Heard Of It: 919 53.06%

Agenty Duck (Brienne Yudkowsky)
Regular Reader: 55 3.181%
Sometimes: 132 7.634%
Rarely: 144 8.329%
Almost Never: 213 12.319%
Never: 254 14.691%
Never Heard Of It: 931 53.846%

Eliezer Yudkowsky's Facebook Page
Regular Reader: 325 18.561%
Sometimes: 316 18.047%
Rarely: 231 13.192%
Almost Never: 267 15.248%
Never: 361 20.617%
Never Heard Of It: 251 14.335%

Luke Muehlhauser (Eponymous)
Regular Reader: 59 3.426%
Sometimes: 106 6.156%
Rarely: 179 10.395%
Almost Never: 231 13.415%
Never: 312 18.118%
Never Heard Of It: 835 48.49%

Gwern.net (Gwern Branwen)
Regular Reader: 118 6.782%
Sometimes: 281 16.149%
Rarely: 292 16.782%
Almost Never: 224 12.874%
Never: 230 13.218%
Never Heard Of It: 595 34.195%

Siderea (Sibylla Bostoniensis)
Regular Reader: 29 1.682%
Sometimes: 49 2.842%
Rarely: 59 3.422%
Almost Never: 104 6.032%
Never: 183 10.615%
Never Heard Of It: 1300 75.406%

Ribbon Farm (Venkatesh Rao)
Regular Reader: 64 3.734%
Sometimes: 123 7.176%
Rarely: 111 6.476%
Almost Never: 150 8.751%
Never: 150 8.751%
Never Heard Of It: 1116 65.111%

Bayesed And Confused (Michael Rupert)
Regular Reader: 2 0.117%
Sometimes: 10 0.587%
Rarely: 24 1.408%
Almost Never: 68 3.988%
Never: 167 9.795%
Never Heard Of It: 1434 84.106%

[This was the 'troll' answer to catch out people who claim to read everything.]

The Unit Of Caring (Anonymous)
Regular Reader: 281 16.452%
Sometimes: 132 7.728%
Rarely: 126 7.377%
Almost Never: 178 10.422%
Never: 216 12.646%
Never Heard Of It: 775 45.375%

GiveWell Blog (Multiple Authors)
Regular Reader: 75 4.438%
Sometimes: 197 11.657%
Rarely: 243 14.379%
Almost Never: 280 16.568%
Never: 412 24.379%
Never Heard Of It: 482 28.521%

Thing Of Things (Ozy Frantz)
Regular Reader: 363 21.166%
Sometimes: 201 11.72%
Rarely: 143 8.338%
Almost Never: 171 9.971%
Never: 176 10.262%
Never Heard Of It: 661 38.542%

The Last Psychiatrist (Anonymous)
Regular Reader: 103 6.023%
Sometimes: 94 5.497%
Rarely: 164 9.591%
Almost Never: 221 12.924%
Never: 302 17.661%
Never Heard Of It: 826 48.304%

Hotel Concierge (Anonymous)
Regular Reader: 29 1.711%
Sometimes: 35 2.065%
Rarely: 49 2.891%
Almost Never: 88 5.192%
Never: 179 10.56%
Never Heard Of It: 1315 77.581%

The View From Hell (Sister Y)
Regular Reader: 34 1.998%
Sometimes: 39 2.291%
Rarely: 75 4.407%
Almost Never: 137 8.049%
Never: 250 14.689%
Never Heard Of It: 1167 68.566%

Xenosystems (Nick Land)
Regular Reader: 51 3.012%
Sometimes: 32 1.89%
Rarely: 64 3.78%
Almost Never: 175 10.337%
Never: 364 21.5%
Never Heard Of It: 1007 59.48%

I tried my best to have representation from multiple sections of the diaspora; if you look at the different blogs you can probably guess which sections they represent.

Stories Read

Harry Potter And The Methods Of Rationality (Eliezer Yudkowsky)
Whole Thing: 1103 61.931%
Partially And Intend To Finish: 145 8.141%
Partially And Abandoned: 231 12.97%
Never: 221 12.409%
Never Heard Of It: 81 4.548%

Significant Digits (Alexander D)
Whole Thing: 123 7.114%
Partially And Intend To Finish: 105 6.073%
Partially And Abandoned: 91 5.263%
Never: 333 19.26%
Never Heard Of It: 1077 62.29%

Three Worlds Collide (Eliezer Yudkowsky)
Whole Thing: 889 51.239%
Partially And Intend To Finish: 35 2.017%
Partially And Abandoned: 36 2.075%
Never: 286 16.484%
Never Heard Of It: 489 28.184%

The Fable of the Dragon-Tyrant (Nick Bostrom)
Whole Thing: 728 41.935%
Partially And Intend To Finish: 31 1.786%
Partially And Abandoned: 15 0.864%
Never: 205 11.809%
Never Heard Of It: 757 43.606%

The World of Null-A (A. E. van Vogt)
Whole Thing: 92 5.34%
Partially And Intend To Finish: 18 1.045%
Partially And Abandoned: 25 1.451%
Never: 429 24.898%
Never Heard Of It: 1159 67.266%

[Wow, I never would have expected this many people to have read this. I mostly included it on a lark because of its historical significance.]

Synthesis (Sharon Mitchell)
Whole Thing: 6 0.353%
Partially And Intend To Finish: 2 0.118%
Partially And Abandoned: 8 0.47%
Never: 217 12.75%
Never Heard Of It: 1469 86.31%

[This was the 'troll' option to catch people who just say they've read everything.]

Worm (Wildbow)
Whole Thing: 501 28.843%
Partially And Intend To Finish: 168 9.672%
Partially And Abandoned: 184 10.593%
Never: 430 24.755%
Never Heard Of It: 454 26.137%

Pact (Wildbow)
Whole Thing: 138 7.991%
Partially And Intend To Finish: 59 3.416%
Partially And Abandoned: 148 8.57%
Never: 501 29.01%
Never Heard Of It: 881 51.013%

Twig (Wildbow)
Whole Thing: 55 3.192%
Partially And Intend To Finish: 132 7.661%
Partially And Abandoned: 65 3.772%
Never: 560 32.501%
Never Heard Of It: 911 52.873%

Ra (Sam Hughes)
Whole Thing: 269 15.558%
Partially And Intend To Finish: 80 4.627%
Partially And Abandoned: 95 5.495%
Never: 314 18.161%
Never Heard Of It: 971 56.16%

My Little Pony: Friendship Is Optimal (Iceman)
Whole Thing: 424 24.495%
Partially And Intend To Finish: 16 0.924%
Partially And Abandoned: 65 3.755%
Never: 559 32.293%
Never Heard Of It: 667 38.533%

Friendship Is Optimal: Caelum Est Conterrens (Chatoyance)
Whole Thing: 217 12.705%
Partially And Intend To Finish: 16 0.937%
Partially And Abandoned: 24 1.405%
Never: 411 24.063%
Never Heard Of It: 1040 60.89%

Ender's Game (Orson Scott Card)
Whole Thing: 1177 67.219%
Partially And Intend To Finish: 22 1.256%
Partially And Abandoned: 43 2.456%
Never: 395 22.559%
Never Heard Of It: 114 6.511%

[This is the most read story according to survey respondents, beating HPMOR by 5%.]

The Diamond Age (Neal Stephenson)
Whole Thing: 440 25.346%
Partially And Intend To Finish: 37 2.131%
Partially And Abandoned: 55 3.168%
Never: 577 33.237%
Never Heard Of It: 627 36.118%

Consider Phlebas (Iain Banks)
Whole Thing: 302 17.507%
Partially And Intend To Finish: 52 3.014%
Partially And Abandoned: 47 2.725%
Never: 439 25.449%
Never Heard Of It: 885 51.304%

The Metamorphosis Of Prime Intellect (Roger Williams)
Whole Thing: 226 13.232%
Partially And Intend To Finish: 10 0.585%
Partially And Abandoned: 24 1.405%
Never: 322 18.852%
Never Heard Of It: 1126 65.925%

Accelerando (Charles Stross)
Whole Thing: 293 17.045%
Partially And Intend To Finish: 46 2.676%
Partially And Abandoned: 66 3.839%
Never: 425 24.724%
Never Heard Of It: 889 51.716%

A Fire Upon The Deep (Vernor Vinge)
Whole Thing: 343 19.769%
Partially And Intend To Finish: 31 1.787%
Partially And Abandoned: 41 2.363%
Never: 508 29.28%
Never Heard Of It: 812 46.801%

I also did a k-means cluster analysis of the data to try to determine demographics, and the ultimate conclusion I drew from it is that I need to do more analysis. Which I would do, except that the initial analysis was a whole bunch of work, and jumping further down the rabbit hole in the hopes I reach an oasis probably isn't in the best interests of myself or my readers.
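
For anyone who wants to jump down that rabbit hole themselves, here is a minimal sketch of the kind of clustering I mean, assuming a CSV export with hypothetical 'Blog'-prefixed readership columns; this is not the original analysis code:

    # Sketch: k-means over one-hot encoded blog readership answers.
    import pandas as pd
    from sklearn.cluster import KMeans

    df = pd.read_csv("survey.csv")  # assumed survey export
    blog_cols = [c for c in df.columns if c.startswith("Blog")]  # hypothetical naming
    X = pd.get_dummies(df[blog_cols].fillna("No answer"))
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
    print(pd.Series(labels).value_counts())  # cluster sizes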

Footnotes


  1. This is a general trend I notice with accessibility. Not always, but very often measures taken to help a specific group end up having positive effects for others as well. Many of the accessibility suggestions of the W3C are things you wish every website did.

  2. I hadn't read this particular SSC post at the time I compiled the survey, but I was already familiar with the concept of a lizardman constant and should have accounted for it.

  3. I've been informed by a member of the freenode #lesswrong IRC channel that this is in fact Roko's opinion, because you can 'timelessly trade with the future superintelligence for rewards, not just punishment' according to a conversation they had with him last summer. Remember kids: Don't do drugs, including Max Tegmark.

  4. You might think that this conflicts with the hypothesis that the true rate of Basilisk belief is lower than 5%. It does a bit, but you also need to remember that these people are in the LessWrong demographic, which means regardless of what the Basilisk belief question means we should naively expect them to donate five percent of the MIRI donation pot.

  5. That is to say, it does seem plausible that MIRI 'profits' from Basilisk belief based on this data, but I'm fairly sure any profit is outweighed by the significant opportunity cost associated with it. I should also take this moment to remind the reader that the original Basilisk argument was supposed to prove that CEV is a flawed concept from the perspective of not having deleterious outcomes for people, so MIRI using it as a way to justify donating to them would be weird.