Is Rationalist Self-Improvement Real?
Cross-posted from Putanumonit.
Imagine that tomorrow everyone on the planet forgets the concept of training basketball skill.
The next day everyone is as good at basketball as they were the previous day, but this talent is assumed to be fixed. No one expects their performance to change over time. No one teaches basketball, although many people continue to play the game for fun.
Geneticists explain that some people are born with better hand-eye coordination and are able to shoot a basketball accurately. Economists explain that highly-paid NBA players have a stronger incentive to hit shots, which explains their improved performance. Psychologists note that people who take more jump shots each day hit a higher percentage and theorize a principal factor of basketball affinity that influences both desire and skill at basketball. Critical race theorists claim that white men’s under-representation in the NBA is due to systemic oppression.
Papers are published, tenure is awarded.
New scientific disciplines emerge and begin studying basketball more systematically. Evolutionary physiologists point out that our ancestors threw stones in a sidearm motion, which explains our lack of adaptation to the different motion of jump shots. Behavioral kinesiologists describe systematic biases in human basketball, such as the tendency to shoot balls with a flatter trajectory and a lower release point than is optimal.
When asked by aspiring basketball players if jump shots can be improved, they all shake their heads and rue that it is human nature to miss shots. A Nobel laureate behavioral kinesiologist tells audiences that even after writing books on biases in basketball his shot did not improve much. Someone publishes a study showing that basketball performance improves after a one-hour training session with schoolchildren, but Shott Ballexander writes a critical takedown pointing out that the effect wore off after a month and could simply be random noise. The field switches to studying “nudges”: ways to design systems so that players hit more shots at the same level of skill. They recommend that the NBA adopt larger hoops.
Papers are published, tenure is awarded.
Then, one day, someone merely looking to get good at basketball, as opposed to getting tenure, comes across these papers. She realizes that the lessons of behavioral kinesiology can be used to improve her jump shot. She practices releasing the ball at the top of her jump from above the forehead with a steep arc. As her shots start swooshing in more people gather at the gym to practice with her. They call themselves Basketballists.
Most people who walk past the gym sneer at the Basketballists. “You call yourselves Basketballists and yet none of you shoot 100%”, they taunt. “You should go to grad school if you want to learn about jump shots.” Some of the Basketballists themselves begin to doubt the project, especially since switching to the new shooting techniques lowers their performance at first. “Did you hear what the Center for Applied Basketball is charging for a training camp?”, they mutter, “I bet their results are all due to selection bias.”
The Basketballists insist that the training does help, that they really get better by the day. Their shots hit at a slightly higher rate than before, although this is swamped by the inter-individual variance. How could they know if it works?
A core axiom of Rationality (capitalized to refer to LessWrong version) is that it is a skill that can be improved with time and practice. The names Overcoming Bias and LessWrong reflect this: rationality is a direction, not a fixed point.
What would it mean to “improve at Rationality”? On the epistemic side, to draw a map that more accurately reflects the territory: to be less swayed by bias, make more accurate predictions, avoid error. On the instrumental side, to use that improved epistemics to achieve one’s goals in life. The two are often conflated, by Rationalists and skeptics alike, but they are also highly correlated — an accurate map gets you to where you’re going.
A core foundation of epistemic rationality is the research on heuristics and biases developed by Daniel Kahneman. The first book in The Sequences is in large part a summary of Kahneman’s work.
Awkwardly for Rationalists, Daniel Kahneman is hugely skeptical of any possible improvement even just in epistemic rationality, especially for whole groups of people. In an astonishing interview with Sam Harris, Kahneman describes bias after bias in human thinking, emotions, and decision making. For every one, Sam asks: How do we get better at this? And for every one, Daniel replies: We don’t, we’ve been telling people about this for decades and nothing has changed, that’s just how people are.
Daniel Kahneman is familiar with CFAR, but as far as I know he has not himself put much effort into developing a community and curriculum dedicated to improving human rationality. He has described human irrationality, mostly to an audience of psychology undergrads. But psychology undergrads do worse than pigeons at learning a simple probabilistic game; we shouldn’t expect them to learn rationality just by reading about biases. Perhaps if they started reading Slate Star Codex…
Alas, Scott Alexander himself is quite skeptical of Rationalist self-improvement. He agrees that Rationalist thinking can help you make good predictions and occasionally distinguish truth from bullshit, but he’s unconvinced that it’s something one can seriously get better at. Scott is even more skeptical of Rationality’s use for life-optimization.
I once told Scott that I credit Rationality with a lot of the massive improvements in my financial, social, romantic, and mental life that happened to coincide with my discovery of LessWrong. Scott argued that I would do equally well in the absence of Rationality by finding other self-improvement philosophies to pour my intelligence and motivation into, and that these latter two are the root cause of my life getting better. Scott also seems to have been doing very well since he discovered LessWrong, but he credits Rationality with not much more than being a flag that united the community he’s part of.
So: on one side are Yudkowsky, CFAR, and several Rationalists, sharing the belief that Rationality is a learnable skill that can improve the lives of most seekers who step on the path. On the other side are Kahneman, Alexander, several other Rationalists, and all the sneerers, who disagree.
When I surveyed my Twitter followers, the results were distributed somewhat predictably:
The optimistic take is that Rationalist self-improvement would work for most people if only they tried it. The neutral take is that people are good at picking out the self-improvement philosophies that would work for them. The pessimistic take is that Rationalists are deluded by sunk cost and confirmation bias.
Who’s right? Is Rationality trainable like jump shots or fixed like height? Before reaching any conclusions, let’s try to figure out why so many smart people who are equally familiar with Rationality disagree so strongly about this important question.
An important crux of disagreement between me and Scott is in the question of what counts as successful Rationalist self-improvement. We can both look at the same facts and come to very different conclusions regarding the utility of Rationality.
Here’s how Scott parses the fact that 15% of SSC readers who were referred by LessWrong have made over $1,000 by investing in cryptocurrency and 3% made over $100,000:
The first mention of Bitcoin on Less Wrong, a post called Making Money With Bitcoin, was in early 2011 – when it was worth 91 cents. Gwern predicted that it could someday be worth “upwards of $10,000 a bitcoin”. […]
This was the easiest test case of our “make good choices” ability that we could possibly have gotten, the one where a multiply-your-money-by-a-thousand-times opportunity basically fell out of the sky and hit our community on its collective head. So how did we do?
I would say we did mediocre. […]
Overall, if this was a test for us, I give the community a C and me personally an F. God arranged for the perfect opportunity to fall into our lap. We vaguely converged onto the right answer in an epistemic sense. And 3 – 15% of us, not including me, actually took advantage of it and got somewhat rich.
Here’s how I would describe it:
Of the 1289 people who were referred to SSC from LessWrong, two thirds are younger than 30, a third are students/interns or otherwise yet to start their careers, and many are for other reasons too broke for it to be actually rational to risk even $100 on something that you saw recommended on a blog. Of the remainder, the majority were not around in the early days when cryptocurrencies were discussed — the median “time in community” on LessWrong surveys is around two years. In any case, “invest in crypto” was never a major theme or universally endorsed in the Rationalist community.
Of those that were around and had the money to invest early enough, a lot lost it all when Mt. Gox was hacked or when Bitcoin crashed in late 2013 and didn’t recover until 2017 or through several other contingencies.
If I had to guess the percent of Rationalists who were even in a position to learn about crypto on LessWrong and make more than $1,000 by following Rationalist advice, I’d say it’s certainly less than 50%. Maybe not much larger than 15%.
Only 8% of Americans own cryptocurrency today. At the absolute highest end estimate, 1% of Americans, and 0.1% of people worldwide, made >$1,000 from crypto. So Rationalists did at least an order of magnitude better than the general population, almost as well as they could’ve done in a perfect world, and also funded MIRI and CFAR with Bitcoin for years ahead. I give the community an A and myself an A.
Now, multiplying money with a simple investment is an incredibly competitive arena of human endeavor, one where we would least expect to find low-hanging fruit that hasn’t been picked. Even if you think Rationalists’ success in that space is modest, that’s still better than the average hedge fund, the actual “professionals”.
For most other goals we care about no efficient market exists to compete with our efforts. Making friends, staying healthy, improving the world with charity, finding compatible partners, managing your happiness and attention, living forever — we should expect some fruit of progress on those to hang lower than a Bitcoin fortune.
Scott blames the failure of Rationality to help primarily on akrasia.
One factor we have to once again come back to is akrasia. I find akrasia in myself and others to be the most important limiting factor to our success. Think of that phrase “limiting factor” formally, the way you’d think of the limiting reagent in chemistry. When there’s a limiting reagent, it doesn’t matter how much more of the other reagents you add, the reaction’s not going to make any more product. Rational decisions are practically useless without the willpower to carry them out. If our limiting reagent is willpower and not rationality, throwing truckloads of rationality into our brains isn’t going to increase success very much.
I take this paragraph to imply a model that looks like this:
[Alex reads LessWrong] → [Alex tries to become less wrong] → [akrasia!] → [Alex doesn’t improve].
I would make a small change to this model:
[Alex reads LessWrong] → [akrasia!] → [Alex doesn’t try to become less wrong] → [Alex doesn’t improve].
A lot of LessWrong is very fun to read, as is all of SlateStarCodex. A large number of people on these sites are just looking to procrastinate during the workday, not to change how their mind works. Only 7% of the people who were engaged enough to fill out the last LessWrong survey have attended a CFAR workshop. Only 20% ever wrote a post, which is some measure of active rather than passive engagement with the material.
In contrast, one person wrote a sequence on trying out applied rationality for 30 days straight: Xiaoyu “The Hammer” He. And he was quite satisfied with the result.
I’m not sure that Scott and I disagree much, but I didn’t get the sense that his essay was saying “just reading about this stuff doesn’t help, you have to actually try”. It also doesn’t explain why he was so skeptical about me crediting my own improvement to Rationality.
Akrasia is discussed a lot on LessWrong, and applied rationality has several tools that help with it. What works for me and my smart friends is not to try and generate willpower but to use lucid moments to design plans that take a lack of willpower into account. Other approaches work for other people. But of course, if someone lacks the willpower to even try and take Rationality improvement seriously, a mere blog post will not help them.
In an essay called Extreme Rationality: It’s Not That Great Scott writes:
The novice goes astray and says, “The Art failed me.”
The master goes astray and says, “I failed my Art.”
Yet one way to fail your Art is to expect more of it than it can deliver.
Scott means to say that Eliezer expects too much of the art in demanding that great Rationalist teachers be great at other things as well. But I think that expecting 50% of LessWrongers filling out a survey to have made thousands of dollars from crypto is setting the bar far higher than Eliezer’s criterion of “Being a math professor at a small university who has published a few original proofs, or a successful day trader who retired after five years to become an organic farmer, or a serial entrepreneur who lived through three failed startups before going back to a more ordinary job as a senior programmer.”
How much improvement does Scott expect? Below is a key quote in his essay, emphasis in the original.
I think it may help me succeed in life a little, but I think the correlation between x-rationality and success is probably closer to 0.1 than to 1.
Well, how big of a correlation is 0.1?
Here’s the chart of respondents to the SlateStarCodex survey, by self-reported yearly income and whether they were referred from LessWrong (Scott’s criterion for Rationalists).
And here’s the same chart after I made a small change. Can you notice it?
In the second chart, I increased the income of all rationalists by 25%.
The following things are both true:
When you eyeball the group as a whole, the charts look identical. A 25% improvement for a quarter of the people in a group you observe is barely noticeable. The rich stayed rich, the poor stayed poor.
If your own income increased 25% you would certainly notice it. And if the increase came as a result of reading a few blog posts and coming to a few meetups, you would tell everyone you know about this astounding life hack.
The correlation between Rationality and income in Scott’s survey is −0.01. That number goes up to a mere 0.02 after the increase. A correlation of 0.1 is absolutely huge; it would require tripling the income of all Rationalists.
The point isn’t to nitpick Scott’s choice of “correlation = 0.1” as a metaphor. But every measure of success we care about, like impact on the world or popularity or enlightenment, is probably distributed like income is on the survey. And so if Rationality made you 25% more successful it wouldn’t be as obviously visible as Scott thinks it would be — especially since everyone pursues a different vision of success. In this 25% world, the most and least successful people would still be such for reasons other than Rationality. And in this world, Rationality would be one of the most effective self-improvement approaches ever devised. 25% is a lot!
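A quick back-of-the-envelope simulation illustrates the point. The incomes and group sizes below are hypothetical stand-ins (not the actual survey data): a heavy-tailed income distribution with a quarter of respondents flagged as Rationalists. Boosting the flagged group’s income by 25% barely moves the correlation:

```python
import random
import statistics

random.seed(0)

N = 5000
# Hypothetical heavy-tailed incomes, standing in for self-reported survey data.
incomes = [random.lognormvariate(10, 1.5) for _ in range(N)]
# Flag a quarter of respondents as "Rationalists".
flags = [1.0 if i < N // 4 else 0.0 for i in range(N)]

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

before = corr(flags, incomes)
# Give every flagged respondent a 25% income boost and re-measure.
boosted = [inc * 1.25 if f else inc for f, inc in zip(flags, incomes)]
after = corr(flags, boosted)
print(f"before: {before:.3f}, after: {after:.3f}")
```

Under these assumptions the correlation stays far below 0.1 even after the boost, because the inter-individual spread of income dwarfs a 25% within-person gain.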
Of course, the 25% increase wouldn’t happen immediately. Most people who take Rationality seriously have been in the community for several years. You get to 25% improvement by getting 3% better each year for 8 years.
Here’s what 3% improvement feels like:
You know what feels crappy? 3% improvement. You busted your ass for a year, trying to get better at dating, at being less of an introvert, at self-soothing your anxiety – and you only managed to get 3% better at it.
If you worked a job where you put in that much time at the office and they gave you a measly 3% raise, you would spit in your boss’s face and walk the fuck out.
And, in fact, that’s what most people do: quit. […]
The model for most self-improvement is usually this:
* You don’t have much of a problem
* You found The Breakthrough that erased all the issues you had
* When you’re done, you’ll be the opposite of what you were. Used to be bad at dating? Now you’ll have your own personal harem. Used to be useless at small talk? Now you’re a fluent raconteur.
Which, when you’ve agonized to scrape together a measly 3% improvement, feels like crap. If you’re burdened with such social anxiety that it takes literally everything you have to go out in public for twenty minutes, make one awkward small talk, and then retreat home to collapse in embarrassment, you think, “Well, this isn’t worth it.”
But most self-improvement isn’t immediate improvement, my friend.
It’s compound interest.
I think that Rationalist self-improvement is like this. You don’t get better at life and rationality after taking one class with Prof. Kahneman. After 8 years of hard work, you don’t stand out from the crowd even as the results become personally noticeable. But if you discover Rationality in college and stick with it, by the time you’re 55 you will be three times better than what you would have been if you hadn’t compounded these 3% gains year after year, and everyone will notice that.
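The compounding arithmetic checks out (assuming a start around age 20, so the span to 55 is 35 years):

```python
# 3% yearly improvement compounds to roughly 25% over 8 years,
# and roughly triples over 35 years (e.g. age 20 to 55).
eight_years = 1.03 ** 8
thirty_five_years = 1.03 ** 35
print(f"{eight_years:.2f}x after 8 years, {thirty_five_years:.2f}x after 35 years")
```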
What’s more, the outcomes don’t scale smoothly with your level of skill. When rare, high-leverage opportunities come around, being slightly more rational can make a huge difference. Bitcoin was one such opportunity; meeting my wife was another, for me. I don’t know what the next one will be: an emerging technology startup? a political upheaval? cryonics? I know that the world is getting weirder faster, and the payouts to Rationality are going to increase commensurately.
There is still the issue of selection bias. Rationalists are not a representative sample of the population by any means. According to the surveys the average LessWrong reader has a vastly higher IQ than average and comes from fields where analytical and systematic thinking is rewarded like engineering, exact sciences, or philosophy. We probably should not conclude that self-improvement through epistemic Rationality will work for many or most people.
But if you’re reading this, you’re probably not most people. The difference is not merely in ability but also in inclination — what you’re curious about and what you’re willing to try. If you’re the sort of person for whom success in life means stepping outside the comfort zone that your parents and high school counselor charted out for you, if you’re willing to explore spaces of consciousness and relationships that other people warn you about, if you compare yourself only to who you were yesterday and not to who someone else is today… If you’re weird like me I think that Rationality can improve your life a lot.
But to get better at basketball, you have to actually show up to the gym.
See also: The Martial Art of Rationality.
Nominating this post because it asks a very important question—it seems worth considering that rationalists should get out of self-improvement altogether and only focus on epistemics—and gives a balanced picture of the discourse. The section on akrasia seems particularly enlightening and possibly the crux on whether or not techniques work, though I still don’t have too much clarity on this. This post also gives me the push necessary to write a long overdue retrospective on my CFAR and Hammertime experience.
Nominating because the idea that rationalists should win (which we can loosely define as “be better at achieving their goals than non-rationalists”) has been under fire in the community (see for instance Scott’s comment on this post).
I think this discusses the concern nicely, and shows what rational self-improvement may look like in practice, re-framing expectations.
While far from the only one, this was one important influence in my own self-improvement journey. It’s certainly something that comes to mind whenever I think of my own self-improvement philosophy, and when it comes to trying to convince others to do the same.
EDIT: The Treacherous Path was published in 2020 so never mind.
Thank you (and to alkjash) for the nomination!
I guess I’m not supposed to nominate things I wrote myself, but this post, if published, should really be read along with The Treacherous Path to Rationality. I hope someone nominates that too.
This post is an open invitation to everyone (such as the non-LWers who may read the books) to join us. The obvious question is whether this actually works for everyone, and the latter post makes the case for the opposite mood. I think that in conjunction they offer a much more balanced take on who and what applied rationality is good for.
(The Treacherous Path to Rationality, while a post I would personally nominate, was not published in 2019, so cannot be nominated for this Review.)
D’oh. I’m dumb.