Agreed that rationality work has not seen much progress, and I’d personally like to move the needle forward on that.
Unfortunately, or perhaps fortunately, the really huge deal problems get all the attention from the really motivated smart people who get convinced by the rational arguments.
Perhaps the way forward on the “improve general rationality” front is to try hiring educational experts from outside the rationality community to build curricula and training based on the Sequences while getting feedback from CFAR, instead of having CFAR build such curricula itself (which it is no longer doing, AFAICT).
Yep, certainly much of the founding CFAR team has become busy with other projects in recent years.
I’m not sure I expect hiring people solely based on their educational expertise to work out well. I agree that one of the things CFAR has done expertly, to a much higher level than any other institution I’ve interacted with (and I did CS at Oxford), is learn how to teach. But while pedagogy is core to the product, the goal is rationality. And I’m not sure someone good at pedagogy but not at rationality will be able to make much progress on rationality without heavy guidance; they can only (do something like) streamline the existing product.
Also: I think you’re implying that AI is a really huge deal problem and rationality is less of one. I’m not sure I agree; I mostly think that AI alignment research has seen an increase in tractability lately. I think that sanity is basically still very rare and super important.
For example, people haven’t become more sane re: AGI; it’s just that the amount of sanity required to not bounce off the problem entirely has decreased (e.g. your local social incentives push against recognising the problem less now that Superintelligence and Musk made it a bit more mainstream, and there are now some open technical problems to work on and orders of magnitude more funding available). If there’s another problem as hard and as important to notice as AI alignment, it’s not obvious we’ve gotten much better at noticing it in the past 5 years.
I’m not sure I expect hiring people solely based on their educational expertise to work out well.
Yes, there needs to be some screening other than pedagogy, but money to find the best people can fix lots of problems. And yes, typical teaching at good universities sucks, but that’s largely because it optimizes for research. (You’d likely have had better professors as an undergrad if you’d gone to a worse university—or at least that was my experience.)
...they can only (do something like) streamline the existing product.
My thought was that streamlining the existing product and turning it into usable and testably effective modules would be a really huge thing.
Also: I think you’re implying that AI is a really huge deal problem and rationality is less.
If that was the implication, I apologize—I view safe AI as merely near-impossible, while making actual humans rational is fundamentally impossible. But raising the sanity waterline has some low-hanging fruit—not getting most people to CFAR-expert levels, but getting high schools to teach some of the basics in ways that potentially have significant leverage in improving social decision-making in general. (And if the top 1% of high-school students also take those classes, there might be indirect benefits that increase the number of CFAR-expert-level people in a decade.)