Trialing for the machine learning living library position at MIRI; occasional volunteer instructor and mentor at CFAR.
Qiaochu_Yuan
Most people go through life using cultural memes that they soak up from their environment. These cultural memes have had lots of selective pressure acting on them, so most of the time they won’t be obviously harmful: for example, most cultures don’t have memes advocating that you stick your hand in fires. Following these cultural memes is a low-variance strategy: you might not become overwhelmingly successful this way, but you’ll also avoid many failure modes.
A basic aspect of LW-style rationality involves questioning and rethinking everything, including these cultural memes. As such, it’s a high-variance strategy: you might end up with new memes that are much better or much worse than standard memes. This might be okay if you’re quite good at questioning and rethinking things, but if you aren’t (and even if you are!), you might afflict yourself with a memetic immune disorder and head towards all sorts of failure modes as a result (joining a cult being the sort of stereotypical thing).
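A toy simulation makes the variance point concrete (this is my own illustrative sketch, not anything from the original comment; the distributions, means, and failure threshold below are all made-up assumptions):

```python
import random

def cultural_default(rng):
    """Low-variance strategy: outcomes cluster tightly around a modest mean."""
    return rng.gauss(1.0, 0.5)

def rethink_everything(rng):
    """High-variance strategy: same mean (by assumption), much wider spread."""
    return rng.gauss(1.0, 3.0)

def failure_rate(strategy, threshold=-2.0, trials=100_000, seed=0):
    """Fraction of simulated lives falling below a 'failure mode' threshold."""
    rng = random.Random(seed)
    return sum(strategy(rng) < threshold for _ in range(trials)) / trials

print(failure_rate(cultural_default))    # ~0.000: catastrophe is ~6 sigma away
print(failure_rate(rethink_everything))  # ~0.16: catastrophe is only 1 sigma away
```

Even with identical expected outcomes, the wider distribution lands below the failure threshold vastly more often; that asymmetry is the whole content of calling meme-following “low-variance.”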
I think most people will be averse to LW-style rationality as part of a general aversion to things that seem too weird, and I think this is probably overall a reasonable aversion for most people to have, as it helps them avoid many failure modes.
First, I want to echo that I’m extremely grateful for this writeup, and also for your hard work on a plausibly important project.
I was one of the people approached in 2016 to write math content. I said I’d think about it but never ended up writing any (aside from, IIRC, a small handful of minor edits to existing pages), and I don’t remember if I gave a detailed explanation of why I didn’t feel excited about writing content on Arbital, so for what it’s worth, here are some extremely belated thoughts about that. I want to contrast Arbital with Math.StackExchange in particular, where writing content is if anything too easy and addictive for me.
First and maybe most importantly, answering questions on math.SE involves a fast and satisfying social exchange. A person asks a question, I answer it, and then I get various social rewards, namely upvotes or comments, which are often of the form “thanks for this clear explanation!” or similar. It’s easy to get a sense that I’m helping people, and it’s nice that I get clear social credit for providing that help. The fact that I’m answering a question also means I don’t have to pick a topic to write about (this is part of what’s preventing me from writing top-level LW 2.0 posts), and I can also tailor my explanation to what the questioner seems most confused about. I got the impression (I don’t remember how accurate this is) that writing an Arbital explanation would be too similar to writing a Wikipedia article, which I’ve never been excited about: I don’t get social credit for helping, I don’t know who is being helped, I have to pick the topic, and I don’t know who to tailor my explanation to.
Looking back, I was unsatisfied with the whole concept of collaboratively writing a long modular sequence of explanations. There were roughly two ways this could go and I disliked both of them for different reasons.
Way #1 was that I’d write only a few pieces of such a sequence while others wrote the rest; I disliked this because 1) I didn’t want the comprehensibility of my explanations to depend on the comprehensibility of other explanations I hadn’t vetted, and 2) I didn’t want to have to fit into a particular narrative or frame from other explanations if I thought I had a better one.
Way #2 was that I’d mostly write such a sequence myself; I disliked this because 1) it takes cognitive effort to hold the first N pieces of a long explanation in working memory when modeling a reader reading the (N+1)st explanation and I wasn’t willing to do this casually, and 2) I didn’t like the idea of writing something this long for an abstract audience as opposed to a particular person or people because I didn’t feel like I had enough to go on as far as modeling where the audience is likely to be confused, etc. Having to model a variety of possible readers was also cognitively effortful and I wasn’t willing to do that casually either. The experience would have felt noticeably different for me if I was asked to model a specific set of readers, e.g. “please write an explanation of logarithms for Alice, then for Bob, then for Charlie”; then it would have felt more like answering a sequence of related math.SE questions.
To the extent that I didn’t explain this to Eric when he asked me in 2016, I can only plead that in 2016 I was less good than I am now at noticing and articulating ways in which I’m unsatisfied or annoyed by something; also, I was to some extent responding to mild perceived social pressure to be enthusiastic about the project.
My admittedly very cynical point of view is to assume that, to a first-order approximation, most people don’t have beliefs in the sense that LW uses the word. People just say words, mostly words that they’ve heard people they like say. You should be careful not to ascribe too much meaning to the words most people say.
In general, I think it’s a mistake to view other people through an epistemic filter. View them through an instrumental filter instead: don’t ask “what do these people believe?” but “what do these people do?” The first question might lead you to conclude that religious people are dumb. The second question might lead you to explore the various instrumental ways in which religious communities are winning relative to atheist communities, e.g. strong communal support networks, a large cached database of convenient heuristics for dealing with life situations, etc.
My summary / take: believing arguments if you’re below a certain level of rationality makes you susceptible to bad epistemic luck. Status quo bias inoculates you against this. This seems closely related to Reason as memetic immune disorder.
It is fairly terrifying that the term “evidence-based medicine” exists because that implies that there are other kinds.
Things that are your fault are good because they can be fixed. If they’re someone else’s fault, you have to fix someone else, and that’s much harder.
-- Geoff Anders (paraphrased)
Obtain a smartphone. It will make your life better. (If you don’t have one because you feel like they’re overhyped, remember that reversed stupidity is not intelligence.) Here is a list of things I use my smartphone to do, in no particular order:
- Record things I want my future selves to do in RTM on the go
- Record sleep data using Sleep Cycle
- Take notes on conversations using either voice memos or Evernote
- Record various kinds of things in Workflowy, e.g. exercise data
- Respond more quickly to emails (people I know have debated the value of doing this, but I get really annoyed when other people take a long time to respond to my emails, and I don’t want to do that to others)
- Receive calendar alerts, alarms, and Boomerangs from my past selves that remind me to do things
- Look things up, e.g. on Wikipedia, on the go (e.g. when I am waiting in line for something)
- Read academic papers on the go
- Search my email for important information on the go, e.g. the location of some event or an ID number of some kind
- Look up directions on the go, e.g. to the location of some event
- Look up places on Yelp on the go
- Look up prices and reviews of an item I’m considering buying IRL on Amazon
There is a possibility of wasting large amounts of time playing games, which I curtailed early on by refusing to download games except during breaks from school.
Too insightful! Not boring enough!
Dude, suckin’ at something is the first step to being sorta good at something.
-- Jake the Dog (Adventure Time)
I mildly dislike this idea because it seems to promote an arguments-as-soldiers mentality.
Generally agree that this is important to keep in mind, but:
Asking for a number instead of offering yours. If I want to call you, I will, but when you ask for my number, I can’t stop you calling or harassing me in the future.
It’s possible my model is just mistaken here, but my understanding is that people generally expect (straight) men to ask for numbers and (straight) women to offer numbers, and deviating from this script on the male side is low-status. Something like “I can’t be bothered to take the next step here, so you do it.” Or maybe “I’m not confident enough to ask for your number, so I’ll give you mine instead and hope for the best.” Agree with the other commenters that offering fake numbers is an option.
Thanks for the detailed update! Donated $1,500.
Virtue ethics might be reframed for the LW audience as “habit ethics”: it’s the notion of ethics appropriate for a mind that precomputes its behavior in most situations based on its own past behavior. (Deontology might be reframable as “Schelling point ethics” or something.)
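To make the “precomputes its behavior” framing concrete, here is a minimal habit-as-cache sketch (my own illustration; the class and names like `HabitAgent` are invented for this example, not anything from the comment):

```python
class HabitAgent:
    """Toy model of 'habit ethics': most actions are cache lookups keyed
    on the kind of situation, seeded by past deliberate choices."""

    def __init__(self):
        self.habits = {}  # situation kind -> action taken in the past

    def act(self, situation_kind, deliberate):
        # Fast path: behavior precomputed from the agent's own past behavior.
        if situation_kind in self.habits:
            return self.habits[situation_kind]
        # Slow path: deliberate once, then cache the choice as a habit.
        action = deliberate(situation_kind)
        self.habits[situation_kind] = action
        return action

agent = HabitAgent()
agent.act("stranger asks for help", deliberate=lambda s: "help them")
# Later similar situations replay the cached choice without fresh deliberation:
assert agent.act("stranger asks for help", deliberate=lambda s: "ignore") == "help them"
```

On this framing, cultivating a virtue amounts to deliberately seeding the cache with good entries, since the rare slow-path choices determine all the fast-path behavior that follows.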
+A whole bunch for this. In general I think we need way more posts of the form “here is an actual thing in the world I tried and here is what I learned from it” (the Arbital post is another really good example) and I want to somehow incentivize them much harder but I’m not sure how.
Agreed. In general, I think a lot of the discussion of ethics on LW conflates ethics-for-AI with ethics-for-humans, which are two very different subjects and which should be approached very differently (e.g. I think virtue ethics is great for humans but I don’t even know what it would mean to make an AI a virtue ethicist).
The LW memeplex may be somewhat too ready to buy into the hypothesis that a given group of people is insane. People do generally respond to incentives, and situations where there are large incentives that people aren’t responding to are probably worth an explanation more descriptive than generic insanity.
Given what I understand to be the dominant stereotypes about American cars, though, I do think it’s plausible that American car manufacturers are insane. I don’t know about others.
Thanks for writing this! I am very excited that this post exists. I think what this model suggests about procrastination and addiction alone (namely, that they’re things that managers and firefighters are doing to protect exiles) is already huge, and resonates strongly with my experience.
In the beginning of 2018 I experienced a dramatic shift that I still don’t quite understand; my sense of it at the time was that there was this crippling fear / shame that had been preventing me from doing almost anything, that suddenly lifted (for several reasons, it’s a long story). That had many dramatic effects, and one of the most noticeable ones was that I almost completely stopped wanting to watch TV, read manga, play video games, or any of my other addiction / procrastination behaviors. It became very clear that the purpose of all of those behaviors was numbing and distraction (“general purpose feeling obliterators” used by firefighters, as waveman says in another comment) from how shitty I felt all the time, and after the shift I basically felt so good that I didn’t want or need to do that anymore.
(This lasted for a while but not forever; I crashed hard in September (long story again) before experiencing a very similar shift again a few weeks ago.)
Another closely related effect is that many things that had been too scary for me to think about became thinkable (e.g. regrettable dynamics in my romantic relationships), and I think this is a crucial observation for the rationality project. When you have exile-manager-firefighter dynamics going on and you don’t know how to unblend from them, you cannot think clearly about anything that triggers the exile, and trying to make yourself do it anyway will generate tremendous internal resistance in one form or another (getting angry, getting bored, getting sleepy, getting confused, all sorts of crap), first from managers trying to block the thoughts and then from firefighters trying to distract you from the thoughts. Top priority is noticing that this is happening and then attending to the underlying emotional dynamics.
In Japan, it is widely believed that you don’t have direct knowledge of what other people are really thinking (and it’s very presumptuous to assume otherwise), and so it is uncommon to describe other people’s thoughts directly, such as “He likes ice cream” or “She’s angry”. Instead, it’s far more common to see things like “I heard that he likes ice cream” or “It seems like/It appears to be the case that she is angry” or “She is showing signs of wanting to go to the park.”
-- TVTropes
Edit (1/7): I have no particular reason to believe that this is literally true, but either way I think it holds an interesting rationality lesson. Feel free to substitute ‘Zorblaxia’ for ‘Japan’ above.
It’s not American slang; it’s internet slang, I guess? (The following is an explanation for anyone who both reads MoR and these discussion threads but isn’t familiar with fanfiction in general.)
“Ship” is a term of art in fan communities deriving from “relationship” that indicates you think two fictional characters in some fictional universe should be together, e.g. “I ship Harry and Hermione” means “I think Harry and Hermione should be together.” A substantial amount of fanfiction is centered on shipping, e.g. you might write fanfiction where Harry and Hermione get together explicitly because you are dissatisfied with the fact that it didn’t happen in canon.
“Shipping wars” are a kind of conflict that can occur in fan communities between people who ship different couples involving the same fictional characters, e.g. Harry/Hermione vs. Ron/Hermione.
“OT3” is short for “One True Threesome”; it derives from “OTP,” which is short for “One True Pairing” and refers to a couple that you ship very strongly, and I guess it means a threesome that you ship very strongly, e.g. Harry/Hermione/Ron. I suppose an OT3 is one way to resolve a shipping war…