Jim Babcock is working on the LW team with Oli, Ray, and me :-)
My hot take response to OP is that the general question should be how well teams of rationalists are doing at their actual goals, not whether they seem superficially successful on common, easy to measure metrics (i.e. lotsa money and popularity).
To pick one of the top goals, how much better is the world doing on its long-term trajectory due to the work of teams around here? There are many key object-level insights (e.g. logical inductors and other core research), and noticeably more global coordination around superintelligence as x-risk (discussion of Bostrom’s book; several full-time and thoughtful funders in the space—OpenPhil, BERI, etc.; highly competent research teams at DeepMind and OpenAI and UC Berkeley; focused tech teams building software for the research community *cough*; a few major conferences; and more). Naturally a bunch of stuff is behind the scenes too.
Perhaps you expected all of this to happen by default, but I’ve been repeatedly surprised by the magnitude of positive events that have occurred. If I compare this to a few bloggers talking through the details of the problem just 10 years ago, it seems quite astounding.
(And when I see very surprising events occur that are high in the utility function, I infer agency.)
Agreed that rationality work has not seen much progress, and I’d personally like to move the needle forward on that. I do think that if you found the people responsible for things I listed above, more than 30% would say reading the sequences / going to CFAR was drastically important for them doing anything useful in this direction whatsoever.
I suppose I didn’t really respond to the explicit claim of the post, being
rationality doesn’t seem to be helping us win in individual career or interpersonal/social areas of life
I do agree that’s not the main thing this community is built around, and that we could do better on that if we tried more. But OP did feel a bit like “Huh, Francis Bacon thought he’d figured out a general methodology for understanding the world better, but did his personal relationships get better / did he get rich quick? I don’t think so.” And then missing that he helped build the frickin’ Royal Society. It’s not true to say the community here hasn’t been tremendously successful while working on some very hard problems.
(Naturally, it’s not clear this progress is enough, and there’s a good chance superintelligence will be an existential catastrophe.)
Agreed that rationality work has not seen much progress, and I’d personally like to move the needle forward on that.
Unfortunately, or perhaps fortunately, the really huge deal problems get all the attention from the really motivated smart people who get convinced by the rational arguments.
Perhaps the way forward on “improving general rationality” is to try hiring educational experts from outside the rationality community to build curricula and training based on the sequences while getting feedback from CFAR, instead of having CFAR build such curricula itself (which it is no longer doing, AFAICT).
Yep, certainly much of the founding CFAR team have become busy with other projects in recent years.
I’m not sure I expect hiring people solely based on their educational expertise to work out well. I agree that one of the things CFAR has done expertly, to a much higher level than any other institution I’ve interacted with (and I did CS at Oxford), is learn how to teach. But while pedagogy is core to the product, the goal is rationality. And I’m not sure someone good at pedagogy but not rationality will be able to make a lot of progress working on rationality without heavy guidance; they can only (do something like) streamline the existing product.
Also: I think you’re implying that AI is a really huge deal problem and rationality is less so. I’m not sure I agree; I mostly think that AI alignment research has seen an increase in tractability lately. I think that sanity is basically still very rare and super important.
For example, people haven’t become more sane re: AGI; it’s just that the amount of sanity required to not bounce off the problem entirely has decreased (e.g. your local social incentives push against recognising the problem less now that Superintelligence+Musk caused it to be a bit more mainstream, and also there are some open technical problems to work on and orders of magnitude more funding available). If there’s another problem as hard and important as AI alignment, it’s not obvious we’ve gotten much better at noticing it in the past 5 years.
I’m not sure I expect hiring people solely based on their educational expertise to work out well.
Yes, there needs to be some screening other than pedagogy, but money to find the best people can fix lots of problems. And yes, typical teaching at good universities sucks, but that’s largely because it optimizes for research. (You’d likely have had better professors as an undergrad if you went to a worse university—or at least that was my experience.)
...they can only (do something like) streamline the existing product.
My thought was that streamlining the existing product and turning it into useable and testably effective modules would be a really huge thing.
Also: I think you’re implying that AI is a really huge deal problem and rationality is less.
If that was the implication, I apologize—I view safe AI as only near-impossible, while making actual humans rational is a problem that is fundamentally impossible. But raising the sanity waterline has some low-hanging fruit—not getting most people to CFAR-expert levels, but getting high schools to teach some of the basics in ways that potentially have significant leverage in improving social decision-making in general. (And if the top 1% of high-school students also take those classes, there might be indirect benefits that increase the number of CFAR-expert-level people in a decade.)