I. CFAR managed to create a workshop which is, in my view, reasonably balanced, and consequently beneficial for most people.
In my view, one of the main problems with “teaching rationality” is that people’s minds often have parts which are “broken” in compatible ways, so that the whole still works. My go-to example is the planning fallacy paired with hyperbolic discounting: because in decision making typically only a product term of the two appears, they can largely cancel out, and the practical decisions of someone exhibiting both biases can be closer to optimal than people expect. Teach such a person just how to be properly calibrated in planning … and you can make them worse off.
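The cancellation claim can be made concrete with a toy model (all numbers and functional forms below are my own illustrative choices, not anything from CFAR material): an agent who both underestimates how long a task takes and discounts the future hyperbolically can end up valuing a delayed reward almost correctly, while “fixing” only the planning fallacy leaves the steep discounting uncompensated.

```python
import math

# Toy model (illustrative numbers only): an agent weighs a reward R that
# arrives once a task of true duration T is finished.  The "correct"
# valuation discounts exponentially.
R, T = 100.0, 10.0
r = 0.10        # exponential discount rate
beta = 0.5      # planning fallacy: the task *feels* like beta * T
k = 0.3436      # hyperbolic discount parameter (chosen here so the biases cancel)

unbiased = R * math.exp(-r * T)        # correct valuation, ~36.8

# Hyperbolic discounting alone undervalues the delayed reward...
fix_planning_only = R / (1 + k * T)    # ~22.5: too pessimistic

# ...but combined with the planning fallacy, the perceived delay shrinks,
# and the two errors largely cancel inside the product term k * (beta * T).
both_biases = R / (1 + k * beta * T)   # ~36.8: close to the unbiased value

print(unbiased, both_biases, fix_planning_only)
```

Under these (hand-picked) parameters, the doubly-biased agent’s valuation nearly matches the unbiased one, whereas calibrating the time estimate alone moves the valuation further from the truth, which is the “make them worse off” failure mode.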
Some of the dimensions to balance I mean here could be labelled, e.g., “S2 getting better S1 data access”, “S2 getting better S1 write access”, “S1 getting a better communication channel to S2”, “striving for internal cooperation and kindness”, “getting good at reflectivity”, “not getting lost infinitely reflecting”. (All these labels are fake but useful.)
(In contrast, a component which was, in my view, off-balance is “group rationality”.)
This is non-trivial, and I’m actually worried about e.g. various EA community-building or outreach events reusing parts of the CFAR curriculum while selecting only the parts which, e.g., help S2 “rewrite” S1.
II. Impressively good pedagogy of some classes
III. Exploration going on, to a decent degree. At least in Europe, every run was a bit different, both with new classes and with significant variance between versions of the same class. (Actually, I don’t know if this was true for the US workshops at the same time / the whole time.)
IV. Heroic effort to keep good epistemics, which often succeeded
V. In my view, some amount of “self-help” is actually helpful.
VI. Container-creation: bringing interesting groups of people together in the same building
VII. Overall, I think the amount of pedagogical knowledge created is impressive, given the size of the org.
[wrote these points before reading your list]