Trialing for the machine learning living library position at MIRI; occasional volunteer instructor and mentor at CFAR.
Qiaochu_Yuan
Privileging the Question
Boring Advice Repository
Paper Trauma
Against utility functions
Rationalist Lent
Most people go through life using cultural memes that they soak up from their environment. These cultural memes have had lots of selective pressure acting on them, so most of the time they won’t be obviously harmful: for example, most cultures don’t have memes advocating that you stick your hand in fires. Following these cultural memes is a low-variance strategy: you might not become overwhelmingly successful this way, but you’ll also avoid many failure modes.
A basic aspect of LW-style rationality involves questioning and rethinking everything, including these cultural memes. As such, it’s a high-variance strategy: you might end up with new memes that are much better or much worse than standard memes. This might be okay if you’re quite good at questioning and rethinking things, but if you aren’t (and even if you are!), you might afflict yourself with a memetic immune disorder and head towards all sorts of failure modes as a result (joining a cult being the stereotypical example).
I think most people will be averse to LW-style rationality as part of a general aversion to things that seem too weird, and I think this is probably overall a reasonable aversion for most people to have, as it helps them avoid many failure modes.
First, I want to echo that I’m extremely grateful for this writeup, and also for your hard work on a plausibly important project.
I was one of the people approached in 2016 to write math content. I said I’d think about it but never ended up writing any (aside from, IIRC, a small handful of minor edits to existing pages), and I don’t remember if I gave a detailed explanation of why I didn’t feel excited about writing content on Arbital, so for what it’s worth, here are some extremely belated thoughts about that. I want to contrast Arbital with Math.StackExchange in particular, where writing content is if anything too easy and addictive for me.
First and maybe most importantly, answering questions on math.SE involves a fast and satisfying social exchange. A person asks a question, I answer it, and then I get various social rewards, namely upvotes or comments, which are often of the form “thanks for this clear explanation!” or similar. It’s easy to get a sense that I’m helping people, and it’s nice that I get clear social credit for providing that help. The fact that I’m answering a question also means I don’t have to pick a topic to write about (this is part of what’s preventing me from writing top-level LW 2.0 posts), and I can also tailor my explanation to what the questioner seems most confused about. I got the impression (I don’t remember how accurate this is) that writing an Arbital explanation would be too similar to writing a Wikipedia article, which I’ve never been excited about: I don’t get social credit for helping, I don’t know who is being helped, I have to pick the topic, and I don’t know who to tailor my explanation to.
Looking back, I was unsatisfied with the whole concept of collaboratively writing a long modular sequence of explanations. There were roughly two ways this could go and I disliked both of them for different reasons.
Way #1 was that I’d mostly write a few pieces of such a sequence; I disliked this because 1) I didn’t want the comprehensibility of my explanations to depend on the comprehensibility of other explanations I hadn’t vetted, and 2) I didn’t want to have to fit into a particular narrative or frame from other explanations if I thought I had a better one.
Way #2 was that I’d mostly write such a sequence myself; I disliked this because 1) it takes cognitive effort to hold the first N pieces of a long explanation in working memory while modeling a reader reading the (N+1)st explanation, and I wasn’t willing to do this casually, and 2) I didn’t like the idea of writing something this long for an abstract audience as opposed to a particular person or people, because I didn’t feel like I had enough to go on as far as modeling where the audience was likely to be confused, etc. Having to model a variety of possible readers was also cognitively effortful, and I wasn’t willing to do that casually either. The experience would have felt noticeably different for me if I had been asked to model a specific set of readers, e.g. “please write an explanation of logarithms for Alice, then for Bob, then for Charlie”; then it would have felt more like answering a sequence of related math.SE questions.
To the extent that I didn’t explain this to Eric when he asked me in 2016, I can only plead that in 2016 I was less good than I am now at noticing and articulating ways in which I’m unsatisfied or annoyed by something; I was also, to some extent, responding to mild perceived social pressure to be enthusiastic about the project.
Thoughts on the January CFAR workshop
My admittedly very cynical point of view is to assume that, to a first-order approximation, most people don’t have beliefs in the sense that LW uses the word. People just say words, mostly words that they’ve heard people they like say. You should be careful not to ascribe too much meaning to the words most people say.
In general, I think it’s a mistake to view other people through an epistemic filter. View them through an instrumental filter instead: don’t ask “what do these people believe?” but “what do these people do?” The first question might lead you to conclude that religious people are dumb. The second question might lead you to explore the various instrumental ways in which religious communities are winning relative to atheist communities, e.g. strong communal support networks, a large cached database of convenient heuristics for dealing with life situations, etc.
What resources have increasing marginal utility?
My summary / take: believing arguments if you’re below a certain level of rationality makes you susceptible to bad epistemic luck. Status quo bias inoculates you against this. This seems closely related to Reason as memetic immune disorder.
The Math Learning Experiment
The January 2013 CFAR workshop: one-year retrospective
It is fairly terrifying that the term “evidence-based medicine” exists because that implies that there are other kinds.
Things that are your fault are good because they can be fixed. If they’re someone else’s fault, you have to fix them, and that’s much harder.
-- Geoff Anders (paraphrased)
- 2 Jul 2013 2:55 UTC; 28 points: comment on Harry Potter and the Methods of Rationality discussion thread, part 20, chapter 90
Obtain a smartphone. It will make your life better. (If you don’t have one because you feel like they’re overhyped, remember that reversed stupidity is not intelligence.) Here is a list of things I use my smartphone to do, in no particular order:
Record things I want my future selves to do in RTM on the go
Record sleep data using Sleep Cycle
Take notes on conversations using either voice memos or Evernote
Record various kinds of things in Workflowy, e.g. exercise data
Respond more quickly to emails (people I know have debated the value of doing this, but I get really annoyed when other people take a long time to respond to my emails, and I don’t want to do that to them)
Receive calendar alerts, alarms, and Boomerangs from my past selves that remind me to do things
Look things up, e.g. on Wikipedia, on the go (e.g. when I am waiting in line for something)
Read academic papers on the go
Search my email for important information on the go, e.g. the location of some event or an ID number of some kind
Look up directions on the go, e.g. to the location of some event
Look up places on Yelp on the go
Look up prices and reviews of an item I’m considering buying IRL on Amazon
There is a possibility of wasting large amounts of time playing games, which I curtailed early on by refusing to download games except during breaks from school.
- 8 Mar 2013 21:24 UTC; 20 points: comment on Boring Advice Repository
- 11 Jan 2015 21:02 UTC; 2 points: comment on 2015 Repository Reruns—Boring Advice Repository
- 20 Apr 2013 22:27 UTC; 0 points: comment on Less Wrong Product & Service Recommendations
Too insightful! Not boring enough!
Dude, suckin’ at something is the first step to being sorta good at something.
-- Jake the Dog (Adventure Time)
I mildly dislike this idea because it seems to promote an argument-as-soldiers mentality.
-- Scott Aaronson