Bad reasons for a rationalist to lose

Reply to: Practical Advice Backed By Deep Theories

Inspired by what looks like a very damaging reticence to embrace and share brain hacks that might work for only some of us and are not backed by Deep Theories. In support of tinkering with brain hacks and self-experimentation where deep science and large trials are not available.

Eliezer has suggested that, before he will try a new anti-akrasia brain hack:

[…] the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms—thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up. And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.

This doesn’t look to me like an expected utility calculation, and I think it should be one. It looks like an attempt to justify why he can’t be expected to win yet, and that may be deeply wrongheaded.

I submit that we don’t “need” (emphasis in original) this stuff, it’d just be super cool if we could get it. We don’t need to know that the next brain hack we try will work, and we don’t need to know that it’s general enough that it’ll work for anyone who tries it; we just need the expected utility of a trial to be higher than that of the other things we could be spending that time on.
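To make that comparison concrete, here is a minimal sketch of the decision criterion. All of the numbers are hypothetical placeholders (the post deliberately leaves the estimates as question marks); the point is only the shape of the calculation.

```python
# Decide whether trialing a brain hack beats the alternative use of the time.
# Every input below is a hypothetical placeholder; plug in your own estimates.

def expected_utility_of_trial(p_works, value_if_works, hours_spent,
                              value_per_hour_alternative):
    """Expected utility of one trial, net of the opportunity cost of the time."""
    expected_gain = p_works * value_if_works
    opportunity_cost = hours_spent * value_per_hour_alternative
    return expected_gain - opportunity_cost

# Example: a 10% chance the hack works, worth 500 "utility points" if it does,
# at a cost of 10 hours that would otherwise earn 3 points per hour.
net = expected_utility_of_trial(p_works=0.10, value_if_works=500,
                                hours_spent=10, value_per_hour_alternative=3)
print(net > 0)  # the trial is worth running if this is True
```

Note that nothing in this inequality requires certainty that the hack works, or that it generalizes; a modest chance of success can carry the calculation if the time cost is low.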

So… this isn’t other-optimizing, it’s a discussion of how to make decisions under uncertainty. What do all of us need to make a rational decision about which brain hacks to try?

  • We need a goal: Eliezer has suggested “I want to hear how I can overcome akrasia—how I can have more willpower, or get more done with less mental pain”. I’d fold cost in with something like “to reduce the personal costs of akrasia by more than the investment in trying and implementing brain hacks against it, plus the expected profit on other activities I could undertake with that time”.

  • We need some likelihood estimates:

    • Chance of a random brain hack working on first trial: ?, second trial: ?, third: ?

    • Chance of a random brain hack working on subsequent trials (after the third—the noise of mood, wakefulness, etc. is large, so subsequent trials surely have non-zero chance of working, but that chance will probably diminish): →0

    • Chance of a popular brain hack working on first (second, third) trial: ? (GTD is lauded by many, many people; your brother-in-law’s homebrew brain hack is less well tried)

    • Chance that a brain hack that would work in the first three trials would seem deeply compelling on first being exposed to it: ?
      (can these books be judged by their covers? how does this chance vary with the type of exposure? what would you need to do to understand enough about a hack that would work to increase its chance of seeming deeply compelling on first exposure?)

    • Chance that a brain hack that would not work in the first three trials would seem deeply compelling on first being exposed to it: ? (false positives)

    • Chance of a brain hack recommended by someone in your circle working on first (second, third) trial: ?

    • Chance that someone else will read up “on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms—thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up. And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas”, all soon: ? (pretty small?)

    • What else do we need to know?

  • We need some time/​cost estimates (these will vary greatly by proposed brain hack):

    • Time required to stage a personal experiment on the hack: ?

    • Time to review and understand the hack in sufficient detail to estimate the time required to stage a personal experiment: ?

    • What else do we need?

… and, what don’t we need?

  • A way to reject the placebo effect—if it wins, use it. If it wins for you but wouldn’t win for someone else, then they have a problem. We may choose to spend some effort helping others benefit from this hack, but that seems to be a different task—it’s irrelevant to our goal.
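Pulling the estimates above together: a sketch of how the per-trial success chances (diminishing after the first few trials, as suggested above) and the time costs might combine into a single expected-value number for a proposed hack. Every figure here (the chances, the hours, the values) is a hypothetical placeholder, not an estimate from this post.

```python
# Sketch: expected net value of attempting up to three trials of a hack,
# with per-trial success chances that diminish across trials.
# All probabilities, values, and times are hypothetical placeholders.

def expected_value_of_hack(p_per_trial, value_if_works, hours_per_trial,
                           review_hours, value_per_hour_alternative):
    """Expected net value of trialing a hack until it works or trials run out."""
    ev = -review_hours * value_per_hour_alternative  # time to review the hack
    p_still_failing = 1.0  # probability we reach the next trial
    for p in p_per_trial:
        # Pay the trial's time cost whenever we reach it...
        ev -= p_still_failing * hours_per_trial * value_per_hour_alternative
        # ...and collect the payoff if this trial is the one that works.
        ev += p_still_failing * p * value_if_works
        p_still_failing *= 1.0 - p
    return ev

# Example: chances of 0.15, 0.10, 0.05 on the first three trials.
ev = expected_value_of_hack(p_per_trial=[0.15, 0.10, 0.05],
                            value_if_works=500, hours_per_trial=8,
                            review_hours=2, value_per_hour_alternative=3)
print(ev > 0)  # worth trying only if the expected net value is positive
```

One design point worth noticing: because later trials are reached only with the probability that earlier ones failed, their time costs are discounted too, which is why even small tail chances of success can leave the trial worthwhile.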


How should we decide how much time to spend gathering data and generating estimates on matters such as this? How much is Eliezer setting himself up to lose, and how much am I missing the point?