I think you should try being a bit more explicit about what your actual goal is.
Improving the rationality of the bottom half is very different from improving the rationality of the top percent (or any other subset).
I suspect that if you came up with some instrumental-rationality metric measuring actual effectiveness when not just following the herd, it would follow a power-law distribution, and when you're dealing with power laws, it is critical to determine which tail you should spend all your effort on.
Of course, people do get to free ride on a lot of decisions, and this complicates things quite a bit, but it doesn’t change the fact that your target audience is a critical choice that needs some real analysis and a more specific goal.
My guess is that more intelligent/rational people will be easier to get a hook in (less bootstrapping problem) and have the potential for more improvement and influence, and that this is where all the effort should be spent.
My immediate efforts are focused on roughly the top 0.1% of high school students measured by general intelligence. If I could I would focus on the top 0.01%.
I agree this should be described in the original post.
Is there another sense in which I should be more explicit about my goals?
Your goals could be something like improving personal relationships, allowing a better legal system to take over, or generating funding or talent for SIAI, which would require increasingly good rationalists.
If I had to guess, I’d guess that you’re going for the ‘other’ category because it includes all sorts of important things which add up/may dominate, and figure that top 0.1%-0.01% is about right.
That doesn’t sound like a bad answer at all, and I’m not sure how I’d change it. I’m just emphasizing the point that the choice matters a lot, and that extra thinking helps.
It’s the kind of thing I’d want to have a discussion about at the next LW meetup, and by the time I start actively teaching people I’d want to have thought about it enough that I can’t bring someone up to speed with my full reasoning in a minute or two.