do you think it’s more important for rationalists to focus even more heavily on AI research so that their example will sway others to prioritize FAI, or do you think it’s more important for rationalists to broaden their network so that rationalists have more examples to learn from?
I think this question implicitly assumes as a premise that CFAR is the main vehicle by which the rationality community grows. That may be more or less true now, and it may well become less true in the future, but most interestingly it suggests that you already understand the value of CFAR as a coordination point for rationality in general. That's the kind of value I think CFAR is trying to generate in the future as a coordination point for AI safety in particular, because it might in fact turn out to be that important.
I sympathize with your concerns (I would love for the rationality community to be more diverse along all sorts of axes), but I worry they're predicated on a view of existential-risk-like topics as luxuries: things we should maybe devote a little time to, but that aren't particularly urgent. If you had a stronger sense of urgency about them as a group (not necessarily about any one of them individually), you might have more sympathy for people, such as the CFAR staff, who really, really just want to focus on them, even though they're highly uncertain and even though there are no obvious feedback loops, because they're important enough to work on anyway.
I am always trying to cultivate a little more sympathy for people who work hard and have good intentions! CFAR staff definitely fit in that basket. If your heart’s calling is reducing AI risk, then work on that! Despite my disappointment, I would not urge anyone who’s longing to work on reducing AI risk to put that dream aside and teach general-purpose rationality classes.
That said, I honestly believe that there is an anti-synergy between (a) cultivating rationality and (b) teaching AI researchers. I think each of those worthy goals is best pursued separately.
That seems fine to me. At some point someone might be sufficiently worried about the lack of a cause-neutral rationality organization to start a new one themselves, and that would probably be fine; CFAR would likely try to help them out. (I don't have a good sense of CFAR's internal position on whether they should themselves spin off such an organization.)
At some point someone might be sufficiently worried about the lack of a cause-neutral rationality organization to start a new one themselves, and that would probably be fine
Incidentally, if someone decides to do this, please advertise here. This change in focus has made me stop my (modest) donations to CFAR. If someone started a cause-neutral rationality-building institute, I'd fund it at a higher(*) level than I funded CFAR.
(*) One of the things that restrained my giving to CFAR in the last few years, other than a lack of money until recently, was uncertainty about their cause neutrality. They seemed biased in the causes they pushed for, and that made me hesitant to fund them further. Now that they've come out of the closet on the issue, I'm against giving them even one cent.