I think you are not looking in the right places, as the groups of rationalists I know are doing incredibly well for themselves—tenure-track positions at major universities, promotions to senior positions in US government agencies, incredibly well-paid jobs doing EA-aligned research in machine learning and AI, huge amounts of money being sent to the rationalist-sphere AI risk research agendas that people were routinely dismissing a few years ago, etc.
To evaluate this more dispassionately, however, I’d suggest looking at the people who wrote high-karma posts in 2009 and seeing what they are doing now. I’ll try that here, but I don’t know what some of these people are doing now. They seem to be an overall high-achieving group. (But we don’t have a baseline.)
https://www.greaterwrong.com/archive/2009 - Page 1: I’m seeing Eliezer (he seems to have done well), Hal Finney (unfortunately deceased, but had he lived a bit longer he would have been a multi-multi-millionaire for being an early bitcoin holder / developer), Scott Alexander (I think his blog is doing well enough), Phil Goetz (?), Anna Salamon (helping run CFAR), “Liron” (?, but he’s now running https://relationshiphero.com/ and seems to have done decently as a serial entrepreneur), Wei Dai (a fairly big name in cryptocurrency), cousin_it (?), CarlShulman (doing a bunch of existential risk work with FHI and other organizations), Alicorn (now a writer and “Immortal bisexual polyamorous superbeing”), HughRistik (?), Orthonormal (still around, but ?), jimrandomh (James Babcock, ?), AllanCrossman (http://allancrossman.com/, ?), and Psychohistorian (Eitan Pechenick, academia).
Jim Babcock is working on the LW team with Oli, Ray, and me :-)
My hot take response to OP is that the general question should be how well teams of rationalists are doing at their actual goals, not whether they seem superficially successful on common, easy-to-measure metrics (i.e. lotsa money and popularity).
To pick one of the top goals, how much better is the world doing on its long-term trajectory due to the work of teams around here? There are many key object-level insights (e.g. logical inductors and other core research), and noticeably more global coordination around superintelligence as x-risk (discussion of Bostrom’s book; several full-time and thoughtful funders in the space such as OpenPhil and BERI; highly competent research teams at DeepMind, OpenAI, and UC Berkeley; focused tech teams building software for the research community *cough*; a few major conferences; and more). Naturally a bunch of stuff is behind the scenes too.
Perhaps you expected all of this to happen by default, but I’ve been repeatedly surprised by the magnitude of positive events that have occurred. If I compare this to a few bloggers talking about the details of the problem just 10 years ago, it seems quite astounding.
(And when I see very surprising events occur that are high in the utility function, I infer agency.)
Agreed that rationality work has not seen much progress, and I’d personally like to move the needle forward on that. I do think that if you found the people responsible for things I listed above, more than 30% would say reading the sequences / going to CFAR was drastically important for them doing anything useful in this direction whatsoever.
I suppose I didn’t really respond to the explicit claim of the post, namely:
rationality doesn’t seem to be helping us win in individual career or interpersonal/social areas of life
I do agree that’s not the main thing this community is built around, and that we could do better on that if we tried more. But OP did feel a bit like “Huh, Francis Bacon thought he’d figured out a general methodology for understanding the world better, but did his personal relationships get better / did he get rich quick? I don’t think so.” And then missing that he helped build the frickin’ Royal Society. It’s not true to say the community here hasn’t been tremendously successful while working on some very hard problems.
(Naturally, it’s not clear this progress is enough, and there’s a good chance superintelligence will be an existential catastrophe.)
Agreed that rationality work has not seen much progress, and I’d personally like to move the needle forward on that.
Unfortunately, or perhaps fortunately, the really huge deal problems get all the attention from the really motivated smart people who get convinced by the rational arguments.
Perhaps the way forward on the “improve general rationality” front is to try hiring educational experts from outside the rationality community to build curricula and training based on the sequences while getting feedback from CFAR, instead of having CFAR work on building such curricula (which they are no longer doing, AFAICT).
Yep, certainly much of the founding CFAR team have become busy with other projects in recent years.
I’m not sure I expect hiring people solely based on their educational expertise to work out well. I agree that one of the things CFAR has done expertly, to a much higher level than any other institution I’ve interacted with (and I did CS at Oxford), is learn how to teach. But while pedagogy is core to the product, the goal is rationality. And I’m not sure someone good at pedagogy but not rationality will be able to make a lot of progress working on rationality without heavy guidance; they can only (do something like) streamline the existing product.
Also: I think you’re implying that AI is a really huge deal problem and rationality is less. I’m not sure I agree; I mostly think that AI alignment research has seen an increase in tractability lately. I think that sanity is basically still very rare and super important.
For example, people haven’t become more sane re: AGI; it’s just that the amount of sanity required to not bounce off the problem entirely has decreased (e.g. your local social incentives push less strongly against recognising the problem now that Superintelligence + Musk made it a bit more mainstream, and there are now some open technical problems to work on and orders of magnitude more funding available). If there’s another problem that is as hard and important to notice as AI alignment was, it’s not obvious we’ve gotten much better at noticing it in the past 5 years.
I’m not sure I expect hiring people solely based on their educational expertise to work out well.
Yes, there needs to be some screening beyond pedagogy, but money to find the best people can fix lots of problems. And yes, typical teaching at good universities sucks, but that’s largely because it optimizes for research. (You’d likely have had better professors as an undergrad if you’d gone to a worse university—or at least that was my experience.)
...they can only (do something like) streamline the existing product.
My thought was that streamlining the existing product and turning it into usable and testably effective modules would be a really huge thing.
Also: I think you’re implying that AI is a really huge deal problem and rationality is less.
If that was the implication, I apologize—I view safe AI as only near-impossible, while making actual humans rational is a problem that is fundamentally impossible. But raising the sanity waterline has some low-hanging fruit: not getting most people to CFAR-expert levels, but getting high schools to teach some of the basics in ways that potentially have significant leverage in improving social decision-making in general. (And if the top 1% of high-school students also take those classes, there might be indirect benefits leading to an increase in the number of CFAR-expert-level people in a decade.)
Just to fill in the slot: in 2009 I was living in Moscow and mostly just partying and enjoying life, and in 2018 I’m living in Zurich with my wife and five kids. Was very happy with my life then, and am very happy now. Doing nicely in terms of money, but no big accomplishments if that’s what you’re asking about. And no, I wouldn’t attribute it to LW, just normal life going on.
Holy shit. Psychohistorian taught my AP Calc BC class. I am in shock.
Update: I messaged Dr. Pechenick on LinkedIn, and I regret to report that he is not in fact Psychohistorian on LessWrong, but Psychohistorian on Twitter. Still, hell of a coincidence.