Is there a large benefit to being in a rationalist hub versus living in a rationalist house? Personally I’m pretty sure my answer to that question would be “no”, but I’m curious how others feel.
Eliezer discusses this general question (in a more a priori way, since there was no in-person hub at the time) in the Sequences post Can Humanism Match Religion’s Output? His claims there broadly match my experience.
Excerpt:
Really, I suspect that what’s going on here has less to do with the motivating power of eternal damnation, and a lot more to do with the motivating power of physically meeting other people who share your cause. The power, in other words, of being physically present at church and having religious neighbors.
This is a problem for the rationalist community in its present stage of growth, because we are rare and geographically distributed way the hell all over the place. If all the readers of this blog lived within a 5-mile radius of each other, I bet we’d get a lot more done, not for reasons of coordination but just sheer motivation.
Hm, thinking about it now that does make sense.

The benefits are mostly about long-term life-trajectory stuff – more new organizations or projects form, there are more mentors available to help people grow, you’re more likely to be able to get hired at a rationalist org, etc.

(These don’t end up applying to everyone who lives in the Bay either – we have more mentors and job openings here, but still not enough to handle an infinite number of people.)
Are rationalist organizations/mentors likely to be significantly better than non-rationalist ones?

If you’re trying to get mentored in x-risk or rationality, yes.

Even those don’t seem completely obvious to me.

I agree that, in general, rationalists have a valuable package of insights that isn’t found elsewhere, but “this package is deep enough and important enough to necessitate working directly with experts on an ongoing basis” is a very high bar. A lot of the relevant knowledge in x-risk and rationality can be obtained more cheaply by reading LW and papers, visiting an event or workshop a few times a year, and so on. I agree that there are probably some subsets of x-risk and rationality for which rationalists happen to hold the best knowledge, and if those are the things you’re most interested in, it might pay to work with, or be mentored by, rationalists in particular. But there are plausibly also large subsets of both x-risk and rationality for which the best knowledge is found elsewhere; for a person interested in those subsets, it’s enough to extract the remaining rationalist insights by shallower means than constant interaction.
For x-risk: there are many fields in which scholarship and experience do not actually develop real expertise: “experts” perform little better than novices. “X-risk in general” looks exactly like the kind of field where this applies, as it’s much closer to the “bad performance” than the “good performance” column of Shanteau’s table of which domains allow for expertise. In particular, developing expertise requires repeated objective feedback that lets you revise your predictions and models: for x-risk, as for other similar domains (intelligence analysis, etc.), such feedback is largely missing, and mostly subjective when it does exist. So the default assumption should be that people who have devoted a lot of time to studying x-risk in general are not going to be much better at it than people with only limited exposure.
Now if you move to some specific subset of x-risk work, such as AI risk or biosecurity, you might get better feedback loops. I talked about this in a presentation I gave at GoCAS a couple of years ago, and suggested that something like this is the way to go. It’s worth noting that despite spending two months there hanging out with x-risk scholars in person, I didn’t feel like I got a lot of valuable new information about x-risk in general.
But then the question is “does some community have strong feedback loops for this subfield of x-risk”, not “is the community rationalist”. It does seem plausible to me that for at least some subsets of AI risk work in particular, the people most closely involved in useful work also tend to be rationalists to at least some extent. On the other hand, if one were concerned with e.g. biosecurity, I do not see rationalists as being particularly engaged with that field, and it seems plausible that the best expertise for that subfield of x-risk would be found elsewhere.
For rationality: it’s true that I have gotten a lot of rationality out of the Sequences, LW in general, CFAR, etc. On the other hand, the value of the Sequences and LW was obviously obtained through online interaction. And I feel like I have also gotten a lot of rationality out of IFS training, meditation teachers, the methodological discussions of various disciplines, etc. So again, the relevant question is less “are you interested in rationality in general” and more “what kind of rationality are you most interested in, and where are the best people with the skillset most related to that”: some of those people may be found in the rationalist community, but for many subsets of rationality, the best teachers are probably elsewhere.
Original: Hm, in my mind that stuff could largely be done remotely, but I’m probably underestimating the importance of in-person interaction.

New: This does make sense. After seeing Raemon’s comment and sleeping on it, I woke up feeling like this could be a big deal. Mostly because rationalist organizations do a lot of good for the world. Secondly because, although it may be possible to “do networking stuff” remotely, in practice that just doesn’t really happen.
I think you’re underestimating serendipity. In a single rationalist house in a non-hub, you’ll have the benefit of being around a couple cool people who think like you (to a first approximation), but you don’t have many opportunities to make new rationalist connections like you would in a larger hub. I’m not really one to proactively reach out to new people, so having the opportunity to meet them at parties or hangouts or through mutual friends has shaped my experience a lot.
Plus, I’ve been really grateful for the opportunities to work at value-aligned organizations, which I almost certainly wouldn’t have had elsewhere.
you’ll have the benefit of being around a couple cool people who think like you (to a first approximation), but you don’t have many opportunities to make new rationalist connections like you would in a larger hub.
Does “think like you” mean “rationalist”, here? I would assume that finding “people who think like you” would be relatively straightforward in e.g. any large city with a major university. That’s been my experience in Helsinki (1.23 M inhabitants in the general urban area and a couple of universities) at least. Though it’s true that most of those people aren’t very familiar with Less Wrong or the rationalist scene (even if some are).
What benefits do you have in mind from making other connections? Intellectual? Hedonic? Networking?
Intellectual: To me, online discussion does a pretty good job of providing diversity of opinion and conversation.
Hedonic: I’m under the impression that the 80/20 principle usually applies heavily here, in the sense that the 2 people you spend the most time with provide a huge chunk of the value, the next 5 provide a good amount, and then there’s a drop-off. If that’s true, then marginal rationalist interactions would be filling in the tail end and not providing too much value.
Networking: This does make sense. After seeing Raemon’s comment and sleeping on it, I woke up feeling like this could be a big deal. Mostly because rationalist organizations do a lot of good for the world. Secondly because, although it may be possible to “do networking stuff” remotely, in practice that just doesn’t really happen.
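The hedonic 80/20 point above can be made concrete with a toy model. Purely for illustration (the 1/k falloff and the pool of 20 connections are made-up assumptions, not anything from this thread), suppose the value you get from your k-th closest connection falls off like 1/k:

```python
# Toy model (illustrative assumption): value of the k-th closest connection
# falls off like 1/k, over a hypothetical pool of 20 connections.
weights = [1 / k for k in range(1, 21)]
total = sum(weights)

top2_share = sum(weights[:2]) / total   # the 2 closest people
top7_share = sum(weights[:7]) / total   # the closest 2 plus the next 5

print(f"share of value from top 2: {top2_share:.0%}")   # ~42%
print(f"share of value from top 7: {top7_share:.0%}")   # ~72%
```

Under this (made-up) falloff, the closest 7 people already capture roughly 72% of the total value, so each additional marginal connection adds relatively little, which is consistent with the intuition in the hedonic point above.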