A healthy topology of the field should have an approximately power-law distribution of hub sizes. This should also hold for related research fields we are trying to advance, like AI alignment or x-risk. If the structure is very far from that (e.g. one or two very big hubs, then nothing, then a lot of groups two orders of magnitude smaller fighting for mere existence), the movement should try to re-balance, supporting the growth of medium-tier hubs.
Although my understanding of network science is abecedarian, I’m unsure both about whether this feature is diagnostic (i.e. whether divergence from a power-law distribution should be a warning sign) and about whether we in fact observe overdispersion even relative to a power law. The latter first.
1) ‘One or two big hubs, then lots of very small groups’ is close to what a power-law distribution should look like (a quick illustration is sketched at the end of this point). If anything, it’s plausible the current topology doesn’t look power-lawy enough. The EA community overlaps with the rationalist community and has somewhat better data on topology: the hub sizes of the EA community are pretty even. This also agrees with my impression: although the Bay Area can be identified as the biggest EA hub, there are similar or at least middle-sized hubs elsewhere (Oxford, Cambridge (UK), London, Seattle, Berlin, Geneva, etc.). If we really thought a power-law topology was desirable, there’s a plausible case to push for centralisation.
The closest I could find to a ‘rationalist survey’ was the SSC survey, which again has a pretty ‘full middle’, and not one or two groups ascendant. That said, I’d probably defer to others’ impressions here, as I’m not really a rationalist and most of the rationalist online activity I see does originate from the Bay Area. But even if so, this impression wouldn’t worry us if we wanted to see a power law here.
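For concreteness, here is a minimal Python sketch of what hub sizes drawn from an actual power-law (Pareto) distribution tend to look like. The exponent, the number of hubs, and the minimum hub size are made-up parameters chosen only for illustration, not estimates from the EA or SSC surveys.

```python
# Minimal sketch: hub sizes drawn from a power-law (Pareto) distribution.
# All parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.1      # assumed tail exponent (smaller -> heavier tail)
n_hubs = 50      # assumed number of hubs
min_size = 10    # assumed size of the smallest viable hub

# Classical Pareto: P(size > s) = (min_size / s) ** alpha for s >= min_size
sizes = np.sort(min_size * (1 + rng.pareto(alpha, n_hubs)))[::-1]

print("five largest hubs:", np.round(sizes[:5]).astype(int))
print("median hub size:  ", int(np.median(sizes)))
print("share of everyone in the top two hubs:"
      f" {sizes[:2].sum() / sizes.sum():.0%}")
```

Re-running this a few times typically gives a handful of dominant hubs and a long tail of much smaller ones, which is the sense in which ‘one or two big hubs, then lots of very small groups’ is roughly what a power law looks like.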
2) My understanding is that there are a few common generators of power-law distributions. One is increasing returns to scale (e.g. cities being more attractive to live in the larger they are, ceteris paribus), another is imperfect substitution (why listen to an okay pianist when I can have a recording of the world’s best?), and a third could be positive feedback loops or Matthew effects (maybe ‘getting lucky’ with a breakout single increases my chance of getting noticed again, even after controlling for musical ability against artists who never had a hit); a toy simulation of this last generator is sketched below.
There are others, but many of these generators are neutral, and some should be welcomed. If there are increasing marginal returns to rationalist density, inward migration to central hubs seems desirable. Certain ‘jobs’ seem to have this property: a technical AI researcher in (say) Japan can probably have greater EV working in an existing group (most of which are in the Bay Area) than trying to seed a new AI safety group in Japan. Ditto if the best people in a smaller hub migrate to contribute to a larger one (although emotions run high, I don’t think calling this ‘raiding’ is helpful; the people who migrate have agency).
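To make the Matthew-effect generator concrete, here is a toy Python simulation of preferential attachment: each newcomer joins an existing hub with probability proportional to its current size, or founds a new hub with a small probability. The parameters are invented; this is a sketch of the mechanism, not a model of the actual community.

```python
# Toy preferential-attachment ("Matthew effect") process: newcomers tend
# to join hubs in proportion to their current size. Parameters invented.
import random

random.seed(0)
p_new = 0.05   # assumed chance a newcomer founds a new hub instead
hubs = [1]     # start with a single one-person hub

for _ in range(10_000):
    if random.random() < p_new:
        hubs.append(1)  # found a new hub
    else:
        # choose a hub with probability proportional to its size
        i = random.choices(range(len(hubs)), weights=hubs)[0]
        hubs[i] += 1

hubs.sort(reverse=True)
print("number of hubs:", len(hubs))
print("five largest:  ", hubs[:5])
print("median size:   ", hubs[len(hubs) // 2])
```

Processes like this (Yule-Simon / preferential attachment) are a standard way of getting heavy-tailed size distributions without anyone acting badly, which is the sense in which such generators are neutral.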
[3) My hunch is that the ‘returns’ are sigmoid, and so are diminishing with new entrants to the Bay Area. ‘Jobs’-wise, it is not clear the Bay Area is the best place to go if you aren’t going to work on AI research (and even if so, this is a skill set that is rare in absolute terms amongst rationalists). Social-wise, there’s limited interaction bandwidth, especially among higher-status folks, and so the typical rationalist who goes to the Bay Area won’t get the upside of the most desirable bits of Bay Area social interaction; when weighed against the transaction costs, staying put and fostering another hub might look better.]
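A minimal numerical sketch of the sigmoid hunch, assuming an arbitrary logistic benefit curve; the midpoint and steepness below are pure assumptions, chosen only to show the shape of the argument (the marginal benefit of one more member is largest for a mid-sized hub and small for a hub well past the inflection).

```python
# Sketch of 'sigmoid returns': if a hub's benefit grows logistically with
# its size, the marginal benefit of one more member peaks at mid-size and
# shrinks for very large hubs. Curve parameters and sizes are assumptions.
import math

def hub_benefit(size: float, midpoint: float = 300, steepness: float = 0.02) -> float:
    """Assumed logistic 'benefit' of a hub as a function of its size."""
    return 1 / (1 + math.exp(-steepness * (size - midpoint)))

def marginal_benefit(size: float) -> float:
    """Benefit added by one extra member joining a hub of this size."""
    return hub_benefit(size + 1) - hub_benefit(size)

for size in (30, 100, 300, 1000, 3000):
    print(f"hub of {size:>4}: marginal benefit of one newcomer = "
          f"{marginal_benefit(size):.5f}")
```

Under these made-up numbers the biggest marginal benefit shows up at the mid-sized hub, which is the shape behind ‘staying put and fostering another hub might look better’.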
1) Thanks for the pointer to the data. I have to agree that if the surveys are representative of the EA/rationalist community, then there actually are enough medium-sized hubs. When plotted, the data seem to look reasonably power-lawy (an argument for greater centralization could then take the form of arguing for a different exponent; a sketch of how such an exponent could be fitted is below).
I’m unsure about what the data actually show; at least my intuitive impression is that much more activity is going on in the Bay Area than the surveys suggest. A possible reason is that the surveys count everybody above some relatively low level of engagement (willingness to fill in a survey) equally, and if we had data weighted by engagement/work effort/… it would look very different.
If the complaints that big hubs are “sucking in” the most active people from smaller hubs are justified, then big differences between “population size” and “results produced” can be a consequence (effectively wasting the potential of some medium-sized hubs, because some key core people left, damaging the hub’s local social structure).
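One way to make the exponent point (and the engagement-weighting worry) concrete is the standard maximum-likelihood fit of a power-law exponent, run once on raw head-counts and once on counts weighted by some engagement proxy. The hub sizes below are invented placeholders, not the survey numbers; only the shape of the comparison is the point.

```python
# Fit a power-law exponent alpha (density p(x) ~ x**(-alpha), x >= x_min)
# by maximum likelihood: alpha = 1 + n / sum(log(x_i / x_min)).
# The example data are invented placeholders, not real survey numbers.
import math

def powerlaw_exponent(sizes: list[float], x_min: float) -> float:
    """MLE of the tail exponent for observations at or above x_min."""
    tail = [x for x in sizes if x >= x_min]
    return 1 + len(tail) / sum(math.log(x / x_min) for x in tail)

# Hypothetical hubs: raw head-counts vs. the same hubs weighted by an
# engagement proxy (hours/week, output, ...), to show how the fitted
# exponent can shift if activity is more concentrated than membership.
headcounts = [800, 350, 300, 250, 200, 150, 120, 90, 60, 40, 30, 20]
weighted = [3000, 500, 450, 300, 220, 150, 100, 70, 40, 25, 15, 10]

print("exponent from head-counts:        ", round(powerlaw_exponent(headcounts, 20), 2))
print("exponent from engagement-weighted:", round(powerlaw_exponent(weighted, 10), 2))
```

With invented numbers like these, the engagement-weighted version comes out with a smaller (heavier-tailed) exponent, which would be the ‘population size vs. results produced’ gap expressed as a distribution.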
2) Yes, there are many effects leading to power laws (and influencing their exponents). In my opinion, rather than trying to argue from first principles about which of these effects are good and which are bad, it may be more useful to find comparable examples (e.g. young research fields, or successful social movements) and compare their structures. My feeling is that the rationality/EA/AI safety communities are getting it somewhat wrong.
> Certain ‘jobs’ seem to have this property: a technical AI researcher in (say) Japan can probably have greater EV working in an existing group (most of which are in the Bay Area) than trying to seed a new AI safety group in Japan.
This certainly seems to be the prevalent intuition in the field, based on EV guesstimates, etc., and IMO it could be wrong. Or, speculating, it possibly isn’t wrong _per se_, but it does not take into account that people want to be in the most prestigious places and groups anyway and already include this at the S1 level, so this model/meme pushes them away from good decisions.