There is an equivocation going on in the post that bothers me. Mot is at first the deity of lack of technology, where “technology” is characterized with the usual examples of hardware (wheels, skyscrapers, phones) and wetware (vaccines, pesticides). Call this, for lack of a better term, “hard technology”. Later, however, “technology” is broadened to include what I’ll call “social technologies” – LLCs, constitutions, markets, etc. One could also put in here voting systems (not voting machines, but e.g. first-past-the-post vs approval), PR campaigns, myths. Social technologies are those that coordinate our behaviour, for ill or good. They can be designed to avoid coordination failures (equilibria no individual wants), or to maximize profits, or to maintain an equilibrium only a minority wants, etc. (The distinction between social and hard tech obviously has blurry boundaries – you’ll notice I didn’t mention IT because much of it is on the border between the two. But somewhat blurry boundaries don’t automatically threaten a distinction.)
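A coordination failure in the sense above – an equilibrium no individual wants – can be made concrete with a toy Prisoner’s Dilemma. This is a minimal sketch with payoffs of my own choosing, not numbers from the post:

```python
# Toy illustration (hypothetical payoffs): a coordination failure as
# "an equilibrium no individual wants". The unique Nash equilibrium
# (mutual defection) is Pareto-dominated by mutual cooperation.
from itertools import product

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 4),
    ("defect",    "cooperate"): (4, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_nash(profile):
    """No player can gain by unilaterally deviating."""
    r, c = profile
    best_r = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in actions)
    best_c = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in actions)
    return best_r and best_c

def pareto_dominated(profile):
    """Some other profile makes BOTH players strictly better off."""
    p = payoffs[profile]
    return any(payoffs[q][0] > p[0] and payoffs[q][1] > p[1]
               for q in payoffs if q != profile)

equilibria = [p for p in product(actions, actions) if is_nash(p)]
print(equilibria)                                  # [('defect', 'defect')]
print([pareto_dominated(p) for p in equilibria])   # [True]
```

The point of the sketch: the only stable outcome is one both players would trade away if they could bind each other – which is exactly what social technologies (contracts, regulation) exist to do.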
Broadening the definition is fine, but then stick to it. You swing between the two when, e.g. you claim that:
boosting technological progress is far more actionable than increasing coordination
This claim only makes sense if “technology” here excludes social tech. (By the way I’d love to see the numbers on this. I’m pretty skeptical. I’d be convinced of this claim when I see us allocate to e.g. voting reform, the kind of capital we’re allocating to e.g. nuclear fusion. But of course capital allocation processes are… coordination processes. More on this next.)
I fully agree that coordination failures can be thought of as a type of technological failure (solving them requires all the technical prowess that other “hard” disciplines require). But they’re a pretty distinct class of failure, and a distinctly important one for this reason: coordination failures tend to be upstream of other technological failures. What technology we build depends on how we coordinate. If we coordinate “well,” we get the technologies (hard or social) we all want (and when/in what order we want them), and none of the ones we don’t want (on some account of what “we” refers to – BIG questions of justice/fairness here, set aside for another day).
I’m also bothered by some normative-descriptive ambiguity in the use of some terms. For example, technological progress is treated as a uni-directional thing here. This is plausibly (though not obviously) true if “progress” is used normatively. It’s definitely false if “progress” is synonymous with something purely descriptive like “development” or, better yet, “change.” If technological development were uni-directional, you’d be hard pressed to account for good and bad technological developments. For such normative judgments to make any sense, there arguably have to be alternative developments we could have pursued, or at least developments we could have chosen not to pursue. See Acemoglu and Johnson for a better understanding of the directions of technological development. Their work also provides examples of the additional predictive power gained by including the direction of technological development in one’s models. Wheels? Net good. Engagement-maximizing social media? Arguably net bad. Here is an alternative framing (easier for libertarian-leaning folks to swallow): regulation is a coordination technology. Effective regulation directs technological development toward the outcomes the regulator wishes for; good regulation directs development toward good outcomes. The two are obviously not always the same. The social engineer (e.g. mechanism designer) tackles the question of how to create effective regulation; the political philosopher tackles what kind of development should be pursued. (And while the political philosopher is working things out, democracy has been deemed a decent placeholder director.)
Another normative-descriptive ambiguity that bothered me: the use of “coordination.” In “coordination failure,” “coordination” is at least weakly normative: the phrase describes a situation in which agents failed to behave collectively in the manner that would have maximized welfare – they failed to behave the way they should have, in some sense. They didn’t behave ideally. “Coordination” has a purely descriptive use too, though, meaning something like “patterns in the collective behaviour of agents.” I’ll italicize the normative use. An instance of *coordination* failure can also be an instance of coordination. For example, in Scott Alexander’s fishermen story (example 3 in Meditations on Moloch), the agents’ actions are coordinated by a market that failed to internalize the cost of pollution, and this results in a *coordination* failure. When you say that:
you could see Vegas as a product of the miraculous coordination technology that is modern capitalism—perhaps an edge case of it, but still an example of its brilliance.
I think there is an equivocation between coordination and *coordination* going on. Vegas is absolutely a fascinating example of capitalism’s coordinating power, much like the peacock’s feather is a fascinating example of sexual selection’s coordinating power. But is either of these successful *coordination*? Much harder to say. (Not sure how to even begin answering the normative question in the peacock case.) EDIT: Punchier example: the Stanford Prison Experiment is another fascinating example of role-playing’s power to coordinate behaviour, but it sure doesn’t seem like an example of successful *coordination*.
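The fishermen case makes the coordination-without-*coordination* point especially crisp, and can be sketched numerically. These are my own stylized numbers, not Scott Alexander’s; `FILTER_COST` and `HARM` are hypothetical:

```python
# Stylized fishermen sketch (my numbers, not Scott Alexander's):
# each fisherman either runs a pollution filter or pollutes. The market
# coordinates behaviour either way; without the harm internalized,
# polluting is individually dominant, and everyone ends up worse off.
N = 10              # fishermen sharing the lake
FILTER_COST = 300   # private cost of running a filter
HARM = 100          # cost each polluter imposes on EVERY lake user

def payoff(i_filter, n_other_polluters):
    """My payoff given my choice and how many others pollute."""
    polluters = n_other_polluters + (0 if i_filter else 1)
    return -(FILTER_COST if i_filter else 0) - HARM * polluters

# Whatever the others do, polluting beats filtering (save 300, eat only
# 100 of my own pollution) -- so polluting is the dominant strategy:
for others in range(N):
    assert payoff(False, others) > payoff(True, others)

print(payoff(False, N - 1))  # -1000: the everyone-pollutes equilibrium
print(payoff(True, 0))       # -300: everyone-filters, unreachable unilaterally
```

This also connects to the regulation-as-coordination-technology framing above: a tax charging each polluter the full HARM * N they impose would flip the dominant strategy and move the equilibrium to everyone-filters.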