It’s been said before for sure, but worth saying periodically.
Something I’d add, which seems like a failure mode I see particularly in EA spheres (less so in rationalist spheres, but they blur together):
Try to do something other than solve coordination problems.
Or, try to do something that provides immediate value to whoever uses it, regardless of whether other people are also using it.
A failure mode I see (and have often fallen into) is looking around and thinking “hmm, I don’t know how to do something technical, and/or I don’t have the specialist skills necessary to do something specialist. But, I can clearly see problems that stem from people being uncoordinated. I think I roughly know how people work, and I think I can understand this problem, so I will work on that.”
But:
It actually requires just as much complex specialist knowledge to solve coordination problems as it does to do [whatever other thing you were considering].
Every time someone attempts to rally people around a new solution, and fails, they make it harder for the next person who tries to rally people around a new solution. This makes the coordination system overall worse.
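For concreteness, the canonical game-theoretic model of a coordination problem is the Stag Hunt: cooperating pays off only if the other party also cooperates. A minimal sketch, with illustrative payoff numbers (textbook variants differ):

```python
# Stag Hunt payoff matrix (illustrative numbers): hunting the stag is the
# best joint outcome, but only pays off if the other player also commits.
PAYOFFS = {
    ("stag", "stag"): (4, 4),  # both coordinate: best joint outcome
    ("stag", "hare"): (0, 3),  # lone cooperator gets nothing
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),  # safe but inferior equilibrium
}

def best_response(opponent_move):
    # Return my payoff-maximizing move, given the other player's move.
    return max(("stag", "hare"),
               key=lambda my: PAYOFFS[(my, opponent_move)][0])
```

Both (stag, stag) and (hare, hare) are equilibria here; the hard part is not identifying the better outcome but shifting everyone’s expectations toward it, which is exactly where the specialist skill comes in.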
This is a fairly different framing than Benquo’s (and Eliezer’s) advice, although I think it amounts to something similar.
At some point I’ll get around to writing a proper post on this topic, but a few brief bullet points:
Coordination problems seem to be the primary bottleneck to economic progress across the large majority of companies and industries today.
One class of evidence in support: go down Forbes’ list of billionaires, and practically all of them (other than the heirs) made their fortune by spending their day-to-day work solving coordination problems (e.g. founding and managing a business). At a higher level of abstraction, most of the successful internet companies make their money by solving coordination problems—Uber, Lyft, Facebook, Amazon and Google are obvious examples.
Flip side of that coin: solving coordination problems yields massive rewards, so the generalized efficient markets principle suggests that it must be really hard to solve coordination problems consistently and at scale.
I think the main take-away is not “try to do something other than solve coordination problems”, but rather “coordination problems are really difficult in general, like beating-the-stock-market level of difficult”. They’re a big-game kind of problem, with potentially huge rewards, but you need to go into it with the same mindset as beating the market: you need to either find a highly specialized niche, or be the very best in the world at some relevant skill, and either way you also need to be fully competent at all the other relevant skills. If it looks like there’s some easy low-hanging fruit to pick, you’re probably missing something, unless there’s a really good reason why nobody else in the world could have noticed that particular fruit.
There are lots of ways for people to improve their own lives and those of their friends without this being massively profitable, though. Like, it seems like you’re conflating the coordination required to, say, start a discussion group with the coordination required to run a tech empire. (I have talked to someone in the rationalist community recently who believes that starting a club is hard because of the social dynamics involved, including expected social discouragement for excluding people.)
You can’t justifiably reason from “doing this at world-class competence is hard” to “you can’t get large gains by being moderately good at this instead of not trying at all”.
[EDIT: note that I’m including things like “having more illuminating intellectual discussions”, “being less afraid to communicate”, and “doing less bullshit work” in “improving one’s own life”, so these feed into other goals, not just personal ones; put on your own oxygen mask first, and all that]
Totally agree. In particular, I do think that solving small-scale coordination problems is one of the main ways that individuals can have high positive impact on their company/community, relative to effort expended. (I like to use an example from an online car dealership where I used to work: the salespeople had no idea what cars were listed or at what price, which caused a lot of friction when someone called in about a car. Our product manager eventually solved this with five minutes of effort: he asked our marketing guy to forward his daily car-ad spreadsheet to the sales team.)
That said, the generalized efficient markets principle doesn’t go completely out the window the moment we zoom in from the whole-world level. The bigger and more obvious the gain from some coordination problem, the more people have probably tried to solve it already, and the harder it’s likely to be. All the usual considerations of generalized efficiency still apply.
This still leaves the question of why coordination problems have unusually high returns (at the world scale). Are there few people who are actually good at it? Is it a matter of value capture rather than value creation? Are people just bad at realizing coordination problems need to be solved? Different theories about the large scale potentially have different predictions about the difficulty & reward of small-scale coordination problems.
Value capture. There are lots of valuable coordination things and valuable non-coordination things, but coordination things lead to network effects and natural monopolies that allow more efficient value capture. If you can become a coordination bottleneck you can often capture more than all of the value.
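A toy numerical sketch of that network-effects point (all numbers hypothetical): if each user’s value from a platform scales with the number of other users, total value grows roughly quadratically, while a standalone product’s value grows only linearly, so at scale the coordination platform dwarfs it.

```python
# Toy model (all numbers hypothetical): platform value under
# Metcalfe-style network effects vs. a product with standalone value.

def linear_value(users, value_per_user=10):
    # Non-coordination product: each user gets a fixed standalone value.
    return users * value_per_user

def network_value(users, value_per_pair=0.01):
    # Coordination platform: each user's value scales with the number
    # of *other* users they can interact with, so total value ~ n^2.
    return users * (users - 1) * value_per_pair

for n in (100, 1_000, 10_000):
    print(f"{n:>6} users: linear {linear_value(n):>9,.0f}, "
          f"network {network_value(n):>9,.0f}")
```

The crossover is the point: small networks create little value, but past some scale the quadratic term dominates, which is what makes becoming the coordination bottleneck so lucrative.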
Also because those who can coordinate use that and other political skills to capture more of the value from people doing other more rationality-compatible useful things.
That was exactly what the little Zvi voice in the back of my head said. I’m not yet convinced. The “network effects → natural monopoly” argument is a strong one, but it still seems like coordination problems are the main economic bottleneck even when there isn’t value capture involved, especially in smaller-scale situations.
Some examples:
Academics who specialize in bridging between fields or sub-disciplines, e.g. biophysicists, mathematical chemists, synthetic biologists (usually from an engineering background), mathematicians who translate one sub-field’s jargon into another, etc.
Cross-department coordination within companies, e.g. the car-ad spreadsheet example above. People who work across specialized departments seem to have unusually high value relative to effort exerted.
There’s a book on tackling large coordination problems in government—they call them “wicked” problems. The opening chapter is the only interesting one. It’s written by Mike McConnell, the guy tasked with fixing up US intelligence after 9/11. Various agencies had all the pieces to stop the attacks, but multiple cross-agency coordination failures prevented them from acting in time.
McConnell also tells the story of the Goldwater-Nichols Act. After the invasion of Grenada, the complete coordination failure of the military was apparent. Each half of the island was controlled by a different branch, and in order to talk to each other, officers had to walk to the nearest payphone and get routed through one of the opposite branch’s offices on the US mainland, because their radios were incompatible. The Goldwater-Nichols Act reorganized things to fix this. It passed despite unanimous opposition by the service chiefs, it worked, and ten years later every single service chief testified before Congress that it was the best thing to ever happen to the US military.
In all of these cases, there’s no clear natural monopoly and no obviously outsized value capture relative to value created. Rather, the “potential energy” is created by language barriers, intra-organization political coalitions, information silos, and communities with limited cross-talk.
That’s not to say value capture isn’t relevant to e.g. Google or Facebook. Obviously it is. But Google (and more debatably Facebook) still creates huge amounts of real value, regardless of how much it captures, and it does so with little “effort”—Google’s employee base is tiny relative to value created, and most of those employees don’t even work on search.
There is an argument to be made that I’m really talking about two qualitatively different cases here: coordination problems which involve breaking down cross-silo barriers, and coordination problems which involve building new markets. Maybe both of these are interesting on their own, but generalizing to all coordination problems goes too far? On the other hand, there are outside-view reasons to expect that coordination problems in general should get worse as the world modernizes—see From Personal to Prison Gangs.
I would also say that when coordination problems exist, it is often easy to see that they exist, so they look like a bottleneck. Whereas if other types of advances could improve things, it is often much harder to notice that a piece of technology is missing.
Specifically, they are not the sort of thing you should be practicing on if you haven’t yet accomplished much (to the point that “go out and DO something” is the most useful advice to be following).
Agreed, though with the caveat that losing some money in the stock market is an important early step in gaining experience—presumably it’s the same with coordination problems. But that sort of practice should be undertaken with the understanding that it’s likely to fail on the object level, and you want that learning experience to be cheap—e.g. don’t make it harder for the next person to solve the coordination problem.
In particular, I wouldn’t want to discourage people from building coordination skills by having a minimum level of status required to even try. Rather, we ideally want ways to experiment that aren’t too damaging if they fail. (And, of course, we want to have realistic expectations about chance of success—ideally people go into a learning experience fully aware that it’s a learning experience, and don’t bet their house on day-trading.)
An additional very important reason to avoid working on coordination problems is Benquo’s reason. If you are attempting a coordination game, even if you have an important technical innovation, you’re going to spend the bulk of your time playing politics and social games, and getting feedback from the political/social world. So by default you won’t be training your rationality; instead you’ll be training something that opposes rationality.
It’s something we need. And at some point we need people with both skill sets. But you need to become stronger first.
This is an excellent point.
To the list of “but”s, I would add:
It is often (usually?) much more difficult to correctly identify coordination problems (due to the lurking danger of unknown unknowns, un-perceived strategic/game-theoretic considerations, insufficient domain knowledge, etc.) than it is to correctly identify simpler (or “object-level” or “technical” or “immediate” etc.) problems.
When attempting to solve such “non-coordination-problems”, it is often easy to get immediate, clear feedback on your attempted solution; whereas, when attempting to solve coordination problems, clear feedback on your attempted solution is hard to come by, may be obscured by a variety of factors, and may come in with a great delay (which itself is an obscuring factor).
(These two problems, of course, lead to the sort of situation described by the Russian saying: “It is very difficult to find a black cat in a dark room—especially if the cat is not there.” In the most pernicious such cases, you may end up contributing to the very problem you were trying to solve—while all the while thinking that your efforts are absolutely critical to preventing things from getting far worse!)