I am always trying to cultivate a little more sympathy for people who work hard and have good intentions! CFAR staff definitely fit in that basket. If your heart’s calling is reducing AI risk, then work on that! Despite my disappointment, I would not urge anyone who’s longing to work on reducing AI risk to put that dream aside and teach general-purpose rationality classes.
That said, I honestly believe that there is an anti-synergy between (a) cultivating rationality and (b) teaching AI researchers. I think each of those worthy goals is best pursued separately.
That seems fine to me. At some point someone might be sufficiently worried about the lack of a cause-neutral rationality organization to start a new one themselves, and that would probably be fine; CFAR would likely try to help them out. (I don’t have a good sense of CFAR’s internal position on whether they should spin off such an organization themselves.)
Incidentally, if someone decides to do this, please advertise here. This change in focus has made me stop my (modest) donations to CFAR. If someone started a cause-neutral rationality-building institute, I’d fund it at a higher(*) level than I funded CFAR.
(*) One of the things that restrained my giving to CFAR in the last few years, other than a lack of money until recently, was uncertainty about their cause neutrality. They seemed biased in the causes they pushed for, and that made me hesitant to fund them further. Now that they’ve come out of the closet on the issue, I’m against giving them even one cent.