I’m partly echoing badger here, but it’s worth distinguishing between three possible claims: (1) An “art of rationality” that we do not yet have, but that we could plausibly develop with experimentation, measurements, community, etc., can help people. (2) The “art of rationality” that one can obtain by reading OB/LW and trying to really apply its contents to one’s life, can help people. (3) The “art of rationality” that one is likely to accidentally obtain by reading articles about it, e.g. on OB/LW, and seeing what happens to rub off, can help people.
There are also different notions of “help people” that are worth distinguishing. I’ll share my anticipations for each separately. Yvain or others, tell me where your anticipations match or differ.
Regarding claim (3): My impression is that even the art of rationality one obtains by reading articles about it for entertainment does have some positive effect on the accuracy of people’s beliefs. A couple of people reported leaving their religions. Many of us have probably discarded random political or other opinions that we held due to social signaling or happenstance. Yvain and others report “clarity-of-mind benefits”. I’d give reasonable odds that there’s somewhat more benefit than this—some unreliable improvement in people’s occasional, major, practical decisions, e.g. about which career track to pursue, and some unreliable improvement in people’s ability to see past their own rationalizations in interpersonal conflicts—but (at least with hindsight bias?) probably no improvements in practical skills large enough to show up on Vladimir Nesov’s poll. Do anyone’s anticipations differ here?
Regarding claim (2): I’d a priori expect better effects from attempts to really practice rationality, and to integrate its thinking skills into one’s bones, than from enjoying chatting about rationality from time to time. A community that reads articles about skateboarding, and discusses skateboarding, will probably still fall over when they try to skateboard twenty feet unless they’ve also actually spent time on skateboards.
As to the empirical data: who here has in fact practiced (2) (e.g., has tried to integrate x-rationality into their actual practical decision-making, as in Yvain’s experiment/technique, or has used x-rationality to make major life decisions, or has spent time listing out their strengths and weaknesses as a rationalist, with specific thinking habits that they really work to integrate in different weeks, etc.)? This is a real question; I’d love data. Eliezer is an obvious example; Yvain cites the impressiveness of Eliezer’s 2001 writings as counter-evidence (and it is some counter-evidence), but: (1) Eliezer, in 2001, had already spent a lot of time learning rationality (though without the heuristics and biases literature); and (2) Eliezer was at that time busy with a course of action that, as he now understands things, would have tended to destroy the world rather than to save it. Due to insufficient rationality, apparently.
I’ve practiced a fair amount of (2), but much less than I could imagine some practicing; and, as I noted in the comment Yvain cited, it seems to have done me some good. Broadly similar results for the handful of others I know who try to get rationality into their bones. Less impressive than I’d like, but I tend to interpret this as a sign that we should spend more time on skateboards, and I anticipate that we’ll see more real improvement as we do.
The most important actual helps involve that topic we’re not supposed to discuss here until May, but I’d say we were able to choose a much higher-impact way to help the world than people without x-rationality standardly choose, and that we’re able to actually think usefully about a subject where most conversations degenerate into storytelling, availability heuristics, attaching overmuch weight to specific conjunctions, etc. Which, if there’s any non-negligible chance we’re right, is immensely practical. But we’re also somewhat better at strategicness about actually exercising, about using social interaction patterns that work better than the ones we were accidentally using previously (though far from as well as the ones the best people use), about choosing college or career tracks that have better expected results, etc.
Folks with more data here (positive or negative), please share.
Regarding claim (1): I guess I wouldn’t be surprised by anything from “massive practical help, at least from particular skilled/lucky dojos that get on good tracks” to “not much help at all”. But if we do get “not much help at all”, I’ll feel like there was a thing we could have done, and we didn’t manage to do it. There are loads of ridiculously stupid kinds of decision-making that most people do, and it would be strange if there were no way we could get visible practical benefit from improving on that. Details in later comments.
I agree with almost everything here, with the following caveats:
I. The practical benefits we get from (3) are (I think I’m agreeing with you here) likely to be so small as to be difficult to measure informally; i.e. anyone who claims to have noticed a specific improvement is as likely to be imagining it as really improving. Probably some effects that could be measured in a formal experiment with a very large sample size, but this is not what we have been doing.
II. (2) shows promise but is not something I see discussed very often on Overcoming Bias or Less Wrong. Using the Boyle metaphor, this would be the technology of rationality, as opposed to the science of it. I’ve seen a few suggestions for “techniques”, but they seem sort of ad hoc (I will admit, in retrospect, that many of the times I was proposing ‘techniques’, I was more trying to sound like I was thinking pragmatically than basing the proposals on good experimental evidence). I’ve tried to apply specific methods to specific decisions, but never gone so far as to set aside a half hour each day for “rationality practice”, nor would I really know what to do with that half hour if I did. I’d like to know more about what you do and what you think has helped.
III. You list a greater appreciation of transhumanism as one of the benefits of x-rationality, but the causal linkage doesn’t impress me. Many of the transhumanists here were transhumanists before they were rationalists, and only came to Overcoming Bias out of interest in reading what transhumanist leaders Eliezer and Robin had to say. I think my “conversion” to transhumanism came about mostly because I started meeting so many extremely intelligent transhumanists that it no longer seemed like a fringe crazy-person belief and my mind felt free to judge it with the algorithms it uses for normal scientific theories rather than the algorithms it uses for random Internet crackpottery. Many other OB readers came to transhumanism just because EY and RH explicitly argued for it and did a good job. Still others probably felt pressure to “convert” as an in-group identification thing. And finally, I think transhumanists and x-rationalists are part of that big atheist/libertarian/sci-fi/et cetera personspace cluster Eliezer’s been talking about: we all had a natural vulnerability to that meme before ever arriving here. AFAIK Kahneman and Tversky are not transhumanists, Aumann certainly isn’t, and I would be surprised if x-rationalists not associated with EY and RH and our group come to transhumanism in numbers greater than their personspace cluster membership predicts.
IV. Given fifty years to improve the Art, I also wouldn’t be surprised with anything from “massive practical help” to “not much help at all”. I don’t know exactly what you mean by “ridiculously stupid decision-making that most people do”, but are you sure it’s something that should be solved with x-rationality as opposed to normal rationality?
I don’t know exactly what you mean by “ridiculously stupid decision-making that most people do”, but are you sure it’s something that should be solved with x-rationality as opposed to normal rationality?
I’m sure it’s something that could be helped with techniques like The Bottom Line, which most intelligent, science-literate, trying-to-be-“rational” people don’t do nearly enough of. Also something that could be helped by paying attention to which thinking techniques lead to what kinds of results, and learning the better ones. Dojos could totally teach these practices, and help their students actually incorporate them into their day-to-day, reflexive decision-making (at least more than most “intelligent, science-literate” people do now; most people hardly try at all). As to heuristics and biases, and probability theory… I do find those helpful. Essential for thinking usefully about existential risk; helpful but non-essential for day-to-day inference, according to my mental but not written (I’ve been keeping a written record lately, but not for long enough, and not systematically enough) observations. The probability theory in particular may be hard to teach to people who don’t easily think about math, though not impossible. But I don’t think building an art of rationality needs to be solely about the heuristics and biases literature. Certainly much of the rationality improvement I’ve gotten from OB/LW isn’t that.
You list a greater appreciation of transhumanism as one of the benefits of x-rationality, but the causal linkage doesn’t impress me.
The benefit I’m trying to list isn’t “greater appreciation of transhumanism” so much as “directing one’s efforts to ‘make the world a better place’ in directions that actually do efficiently make the world a better place”.
As to the evidence and its significance:
Even if we skip transhumanism, and look fully outside the Eliezer/Robin/Vassar orbit, folks like Holden Karnofsky of GiveWell are impressive, both in terms of ability to actually analyze the world, and in terms of positive impact. You might say it’s just traditional rationality Holden is using—certainly he didn’t get it from Eliezer—but it’s beyond the level common among “intelligent, science-literate people” (who mostly donate their money in much less effective ways).
Within transhumanism… I agree that the existing correlation between transhumanism and rationality-emphasis will tend to create future correlation, whether or not rationality helps one see merits in transhumanism. And that’s an important point. But it’s also a striking regularity that when people show up and say they want to spend their lives reducing AI risks, they’re often people who spent unusual effort successfully becoming better thinkers before they ever heard of Eliezer or Robin, or met anyone else working on this stuff. It’s true that maybe we’re just recognizing “oh, someone who cares about actually getting things right, that means I can relax and believe them” (or, worse, “oh, someone with my brand of tennis shoes, let me join the in-group”). But…
Recognizing that someone else has good epistemic standards and can be believed is rationality working, even without independently deriving the same conclusions (though under the tennis shoe interpretation, not so much);
Many of us (independently, before reading or being in contact with anyone in this orbit) said we were looking for the most efficient use of some time/money, and it’s probably not an accident that trying to become a good thinker, and asking what use of time/money will actually help the world, tend to correlate, and tend to lead to modes of action that actually do help the world.