CFAR, to really succeed at what I see as its mission (bring rationality to the masses), needed...
IMO (and the opinions of Davis and Vaniver, who I was just chatting with), CFAR doesn’t and didn’t have this as much of its mission.
We were and are (from our founding in 2012 through the present) more focused on rationality education for fairly small sets of people who we thought might strongly benefit the world, e.g. by contributing to AI safety or other high-impact things, or by adding enrichment to a community that included such people. (Though with the notable exception of Julia writing the IMO excellent book “Scout Mindset,” which she started while at CFAR and which I suspect reached a somewhat larger audience.)
I do think we should have chosen our name better, and written our fundraising/year-end-report blog posts more clearly, so as to not leave you and a fair number of others with the impression we were aiming to “raise the sanity waterline” broadly. I furthermore think it was not an accident that we failed at this sort of clarity; people seemed to like us and to give us money / positive sentences / etc. when we sounded like we were going to do all the things, and I failed to adjust our course away from that local reward of “sound like you’re doing all the things, so nobody gets mad” to “communicate what’s actually up, even when that looks bad, so you’ll be building on firm ground.”
One Particular Center for Helping A Specific Nerdy Demographic Bridge Common Sense and Singularity Scenarios And Maybe Do Alignment Research Better But Not Necessarily The Only Or Primary Center Doing Those Things
We were and are (from our founding in 2012 through the present) more focused on rationality education for fairly small sets of people who we thought might strongly benefit the world, e.g. by contributing to AI safety or other high-impact things, or by adding enrichment to a community that included such people.
Maybe this was a wrong strategy even given your goals.
Imagine that your goal is to train 10 superheroes, and you have the following options:
A: Identify the 10 people with the greatest talent, and train them.
B: Focus on scaling. Train 10 000 people.
It seems possible to me that the 10 best heroes in strategy B might actually be better than the 10 heroes in strategy A. It depends on how good you are at identifying talented heroes, whether the ones you choose actually agree to be trained by you, what kinds of people self-select for the scaled-up training, etc.
Furthermore, this is actually a false dilemma. If you find a way to scale, you can still have a part of your team identify and individually approach the talented individuals. They might be even more likely to join if you tell them that you have already trained 10 000 people but that they will get individualized elite training.
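To make the "how good are you at identifying talent" variable concrete, here is a toy Monte Carlo sketch (an illustration added here, not something from the thread): it assumes both strategies draw on the same pool of 10 000 candidates with normally distributed talent, that strategy A keeps the 10 who look best under a noisy Gaussian assessment, and that strategy B trains everyone and keeps the true top 10. Self-selection effects are not modeled.

import random

def compare_strategies(pool_size=10_000, keep=10, assessment_noise=1.0, trials=200):
    """Return the average true talent of the kept ten under strategies A and B."""
    a_sum = b_sum = 0.0
    for _ in range(trials):
        talent = [random.gauss(0, 1) for _ in range(pool_size)]
        # Strategy A: rank by a noisy assessment of talent, keep the apparent top ten.
        apparent_best = sorted(
            talent, key=lambda t: t + random.gauss(0, assessment_noise), reverse=True
        )[:keep]
        # Strategy B: train everyone; the true top ten reveal themselves afterwards.
        true_best = sorted(talent, reverse=True)[:keep]
        a_sum += sum(apparent_best) / keep
        b_sum += sum(true_best) / keep
    return a_sum / trials, b_sum / trials

print(compare_strategies())  # with 1 SD of assessment noise this tends to print roughly (2.3, 3.3)

In this toy model the gap hinges almost entirely on assessment_noise: with perfect assessment (noise 0) the two strategies coincide, and the noisier the assessment, the more strategy B's "train everyone and see" pulls ahead.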
A Center for Trying to Improve Our Non-Ideal Cognitive Inclinations for Navigating to Gigayears
ACTION ICING
I’m liking the uptick in “Gigayears”