There seems to be a difference of opinion on what applied rationality means. In my view, CFAR is at least one step removed from helping you be more rational at life. In a sense, CFAR is the doctor who gives you the antidepressants which you take to improve your own life, rather than the people who improve your life directly—the tools that let you make your own tools.
There’s no law of the universe saying that teaching someone literary criticism instead of writing won’t improve their writing skills. The concerns are about how effective it is, and whether it will end up causing the curriculum to diverge further from reality, since its output is harder to measure.
Then there is also the question of bias. There is no control group and no objective measurement; you likely paid thousands of dollars to attend, and it was run by people you respect. This is hardly the standard practice of a scientific experiment.
In my view, CFAR is at least one step removed from helping you be more rational at life. In a sense, CFAR is the doctor who gives you the antidepressants which you take to improve your own life, rather than the people who improve your life directly.
Not sure what “improving your life directly” would mean as a rationality technique: if a doctor gives you antidepressants and those make you happier, then that doctor has improved your life directly. (This was very much my own experience of antidepressants, by the way: they pretty much saved my life directly.) Unless we’re talking about something like seeing someone dying of hunger on the street and giving them a safe place to stay, where you directly change someone’s environment, all life-improvement ultimately has to be done by the person themselves. Even therapists aren’t improving your life directly; they are working with you to help you improve your life. (Granted, there is a stereotype of therapy as something where you can just let someone else fix you, but that’s a misconception: the therapist gives you the tools and then supports you until you’ve internalized them and no longer need external support, just as CFAR tries to do.)
If your position is that “CFAR techniques haven’t been shown to be useful beyond reasonable doubt”, then that is a sensible position to have.
However, pretty much the same criticisms could also be applied to the benefits of reading rationalist stuff in general. Yet you seem to put a substantial probability on the possibility of things like the Sequences actually being useful for people; if they weren’t, then presumably there would be no point in trying to intentionally build a “rationalist community”.
To me, it looks like while the evidence for the usefulness of both the Sequences and CFAR is shaky, the evidence for CFAR’s usefulness is stronger. CFAR at least ran a study that attempted to address the confounding factors. One can reasonably be skeptical of their argument that some useful information can still be squeezed out of the study despite the kinds of issues you mention, but it is still more evidence than we have for the usefulness of any other rationalist material.
Thus it seems to me inconsistent to think both that 1) we should be putting effort into building a rationality community, and that 2) CFAR’s material is probably useless, given that the evidence for CFAR seems stronger than the evidence for a rationality community in particular being useful.