Survey Results: 10 Fun Questions for LWers

I posted a quick 10-question survey, and 100 users filled it out. Here are the results!

1. Pick the answer most people will not pick.

  • HPMOR (26%)

  • SlateStarCodex (26%)

  • The Sequences (30%)

  • A dust speck in the eye (18%)

Well done to everyone who picked the dust speck! I myself won this, and am pleased.

My reasoning was that it was the most distinctive option in type (the rest were all ‘big things people read’) and so would be considered the obvious pick, thus rendering it non-obvious. I now overconfidently believe LWers operate at level 2, and that I can always win by playing level 3. (I will test this again some time in the future.)

My housemates point out that we should all have rolled a 4-sided die and picked the corresponding option, giving us some chance for 100% of us to win if exactly 25% of us landed on each answer. So now I feel a little bit sad, because I sure didn’t think of that.
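
(For the curious: assuming we each rolled a fair 4-sided die independently, the everyone-wins outcome requires a perfect 25/25/25/25 split among 100 respondents. Here’s a quick back-of-envelope check of that multinomial point probability; the variable names are my own.)

```python
from math import factorial
from fractions import Fraction

# Chance that 100 independent fair d4 rolls land exactly 25 times on
# each face: the multinomial point probability of a 25/25/25/25 split.
n, faces = 100, 4
ways = factorial(n) // factorial(n // faces) ** faces  # multinomial coefficient
p_perfect_split = Fraction(ways, faces ** n)
print(float(p_perfect_split))  # ≈ 0.001, i.e. about 1 in 1000
```

So the die strategy would make everyone win only about 0.1% of the time; the rest of the time it just hands the win to whichever roughly-quarter of respondents landed on the least-rolled face.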

2. To the nearest 10, at least how much karma makes a post worth opening?

Plus one selection of “200+” not shown in the chart above.

The median answer was 20, the mean was 28, and the st. dev. was 25, all to the nearest whole number.

3. How much do you think most LWers believe in the thesis of Civilizational Inadequacy?

The average was 6.08 and the st. dev. was 1.4 (2 s.f.).

Don’t have much to say here.

4. How much do you believe in the thesis of civilizational inadequacy?

Average: 6.13. St. dev.: 1.7 (2 s.f.).

On average, we had good models of each other: the mean prediction of the community’s belief (6.08) almost matched the community’s actual mean belief (6.13). It will be interesting to see how people’s accuracy here correlates with the other questions.

5. Here are 10 terms that have been used on LW. Click the terms you have used in conversation (at least, say, 10 times) because of LessWrong (and other rationalist blogs).

Here they are, in order of how many clicks they got:

  • Existential Risk (64)

  • Coordination Problem (61)

  • Bayesian Update (58)

  • Common Knowledge (53)

  • Counterfactual (51)

  • Goodhart (47)

  • Slack (38)

  • Legibility (in the sense of James C. Scott’s book “Seeing Like a State”) (31)

  • Asymmetric tools / asymmetric weapons (28)

  • Babble (or Prune) (17)

On average, people had used 44.8% of these terms in conversation (at least 10 times). Which is… higher than I’d have predicted.
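
(As a sanity check, and assuming all 100 respondents saw this question, that figure falls straight out of the click counts listed above:)

```python
# Click counts for the 10 terms, in the order listed above.
clicks = [64, 61, 58, 53, 51, 47, 38, 31, 28, 17]
respondents, terms = 100, len(clicks)

# Total ticks divided by total possible ticks gives the mean fraction
# of the listed terms that each respondent reported using.
mean_fraction = sum(clicks) / (respondents * terms)
print(f"{mean_fraction:.1%}")  # -> 44.8%
```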

6. How much slack do you have in your life?

Average: 5.36. St. dev.: 2.4 (2 s.f.).

I’d been quite worried it would skew hard in the low direction, but it seems there are a fair number of people here who are doing kind of okay. Perhaps everyone has more slack due to covid? But the distribution is weirdly bimodal, and I didn’t have a theory that predicted that.

7. How many dollars would you pay for a copy of a professionally edited and designed book of the best essays as voted in the 2018 LessWrong Review? (including shipping costs)

Average: $12.30. Median: $10.

Well, that’s good to know. If we want it to sell for more than that, we need to make it more attractive for y’all...

8. How happy are you with the work of the LessWrong Team?

Average: 7.21. St. dev.: 1.6.

The text on either end was about whether the LW team has been strongly ‘net negative’ or ‘net positive’ in its impact on the site.

Overall, that’s 79% of people giving 7-9, 17% ambivalent (5-6), and 4% thinking net negative. So overall it seems pretty good to me. I’ll ask more pointed questions in the future, but it was good to see the sign being quite positive overall.

9. When you feel emotions, do they mostly help or hinder you in pursuing your goals?

Average: 5.6. St. dev.: 2.1.

Interesting. Of note, if you’re in this set, then going to a CFAR workshop would increase your answer to this question on average by 0.84, given the data from their longitudinal study (data that I discussed here). That’s if you haven’t already been, of course.

10. In a sentence or two, what’s LessWrong’s biggest problem? (optional)

This one was fun. Some interesting ones:

  • Nobody knows how to write short, concise posts, including me.

  • My high-effort posts don’t get enough upvotes.

  • Play, humor, and levity seem kind of underutilized compared to the Sequence days, and that makes me sad.

  • Level of some AI posts is intimidating

  • It needs better real time interactive tools for debate. E.g. you could attribute karma for just a section of the post, not the whole post, and comment and expand on sections of the post (using maybe tip-boxes?) while reading the post.

  • There’s no link to r/rational.

  • Discussion norms stifle pursuit of truth (too much focus on being “prosocial” / “polite”, on people’s feelings, etc.).

  • Hasn’t cracked the central problem in getting generative pairings of people together rather than chains of criticism and responses.

I also resonated a bit with whoever answered “lying motherfuckers who practice ‘instrumental rationality’ instead of telling the goddamned truth”. Although I think on net we’re doing a good job with this on LW.

Over half the respondents said something. Here’s a spreadsheet with the full responses.

Final Thoughts

I looked for interesting correlations and checked 15 of them.

  • Most were below 0.1, with one or two nearing 0.2, all of which I discarded.

  • There was a strong correlation (0.62) between people’s own belief in civilizational inadequacy (Q4) and their estimate of the community’s belief (Q3). It’s basically a measure of the strength of the typical mind fallacy around here.

  • I find myself a bit confused about how to calculate the Bayesian truth serum correctly for the civilizational inadequacy questions; I’m not sure how to compute something that isn’t just the 0.62 number above. Here’s the whole data set. Can someone help? (A sketch of Prelec’s original scoring rule is below.) If you figure it out, I’ll give you a strong upvote and add it to the post.
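
For anyone who wants to take a crack at it: here is a minimal sketch of Prelec’s scoring rule for discrete answers, which is the thing I’m trying to adapt. The function names are mine, and the point_to_distribution smoothing step is an assumption I’m not sure about, since I only collected a single predicted value (Q3) from each person rather than a full predicted distribution.

```python
import numpy as np

def bts_scores(endorsements, predictions, alpha=1.0, eps=1e-9):
    """Prelec-style Bayesian truth serum for K discrete answer options.

    endorsements: (N,) int array, each respondent's own answer in 0..K-1.
    predictions:  (N, K) array, each respondent's predicted distribution
                  over the population's answers (rows sum to 1).
    Returns an (N,) array: information score + alpha * prediction score.
    """
    N, K = predictions.shape
    x_bar = np.bincount(endorsements, minlength=K) / N    # actual answer shares
    log_y_bar = np.log(predictions + eps).mean(axis=0)    # log geometric-mean prediction
    # Information score: how "surprisingly common" your own answer was.
    info = np.log(x_bar[endorsements] + eps) - log_y_bar[endorsements]
    # Prediction score: -KL(actual shares || your predicted distribution).
    pred = (x_bar * (np.log(predictions + eps) - np.log(x_bar + eps))).sum(axis=1)
    return info + alpha * pred

def point_to_distribution(predicted_value, K=10, spread=1.5):
    """Smear a single predicted value (0..K-1) into a distribution.

    This smoothing step is my own assumption, not part of BTS itself.
    """
    ks = np.arange(K)
    w = np.exp(-0.5 * ((ks - predicted_value) / spread) ** 2)
    return w / w.sum()
```

The idea would be to feed in the Q4 answers (shifted to 0-9) as endorsements and the smeared Q3 answers as predictions; the information term then rewards answers that turn out to be “surprisingly common”. Whether that smoothing step is legitimate is exactly where I’m stuck.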

Thank you all for answering! Anyone got any other question ideas?