I’ve been thinking lately about the assumptions underlying different intellectual fields. This is hard to compress, and I’m working on a few posts about different aspects of these assumptions, but here’s a very rough summary of two crucial factors and how they vary across different fields:
The first is where the boundaries around agents are drawn. E.g. economics and law typically take the individual to be the basic unit of analysis. It’s possible to talk about larger or smaller units in some cases, but these aren’t as fundamental (e.g. neither would work if we thought of your tomorrow-self as a different person from your current self).
What’s hard to do using econ/law-brained thinking is to reason about larger-scale units (e.g. cultures, ethnic groups, religious groups, political factions). These fields can talk about companies, but treat them as a very limited kind of group agent.
Meanwhile, utilitarian and scientific thinking don’t really require a privileged notion of agency/identity at all: they focus on utility and truth in ways that are independent of which entities possess them.
The second factor is how much each field compresses the thing it cares about onto a single dimension (in other words, how much its subject area is commensurable). Utilitarianism and economics do this pretty straightforwardly. For politics, I’ve specified “power politics” because idealized politics aims to represent many people’s interests on many different axes. However, in practice, power politics tends to collapse to a single axis of political conflict, aimed at gaining a single resource: power. (We don’t yet know how to measure power in a way that makes it commensurable, but I think it’s probably possible.)
Meanwhile science, law, culture, (virtue) ethics, etc, all deal with multifaceted representations that can’t be scored on a single metric.
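The contrast above can be sketched concretely. Below is a minimal illustration (my own toy construction, not anything from the fields themselves): a commensurable framework collapses every outcome onto one scalar via fixed exchange rates, so any two outcomes are comparable; a multidimensional framework can only offer a partial order, under which many pairs are simply incomparable.

```python
# Toy illustration (hypothetical names and numbers): one framework scores
# outcomes on a single axis, the other only compares them dimension-wise.

def utilitarian_score(outcome, weights):
    # Collapse every attribute onto a single axis via fixed exchange rates.
    return sum(weights[k] * v for k, v in outcome.items())

def pareto_dominates(a, b):
    # Multidimensional comparison: a beats b only if it's at least as good
    # on every axis and strictly better on at least one.
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

a = {"welfare": 5, "fairness": 1}
b = {"welfare": 3, "fairness": 4}
weights = {"welfare": 1.0, "fairness": 1.0}

# The scalar framework always delivers a verdict...
print(utilitarian_score(a, weights), utilitarian_score(b, weights))  # 6.0 7.0
# ...but under Pareto comparison, a and b are simply incomparable.
print(pareto_dominates(a, b), pareto_dominates(b, a))  # False False
```

The point of the sketch: once you pick the weights, every pair of outcomes becomes comparable; refuse to pick them and you're stuck with incomparability, which is exactly the trade-off between the two columns.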
I think bayesianism goes in the same bucket as utilitarianism, and game theory goes in the same bucket as economics.
It’s extremely tempting for intellectuals to drift towards primarily using frameworks near the top-left corner because that’s what’s easiest to formalize. And all else equal, formalizing things is good. What’s difficult is not letting the formalisms dominate your thinking to such an extent that you start to ignore things closer to the bottom-right corner. In other words, fields with few dimensions and few boundaries should be seen as special cases of fields with many dimensions and many boundaries, rather than vice versa.
As a comment on identity in science: certainly it’s not about ‘personal/agentic’ identity, but I have been thinking a lot about how we draw boundaries between objects—like “when” a group of 3 quarks “is” a proton. Generally this involves specifying a “scale” parameter expressing roughly how much information we’re willing to lose in abstracting away from details; then we take, symbolically, the limit where this parameter goes to infinity. Then you can use perturbation theory to cash out lower-level effects at the higher level (e.g. how quark structure affects proton collision statistics).
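Schematically (my notation, a sketch rather than any standard formalism), the picture is a high-level observable recovered as the leading term of an expansion in the inverse scale parameter, with perturbation theory supplying the correction coefficients from lower-level structure:

```latex
% Schematic only: O is a high-level observable, \Lambda the scale
% parameter, and the c_i encode lower-level (e.g. quark-scale) structure.
O(\Lambda) = O_\infty + \frac{c_1}{\Lambda} + \frac{c_2}{\Lambda^2} + \dots
```

Taking the limit $\Lambda \to \infty$ keeps only $O_\infty$ (the clean higher-level object), while the $c_i/\Lambda^i$ terms are where the lower-level details re-enter.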
For systems of agents, the analogue is ‘coherence’ from the group-level perspective, or ‘incentive-compatibility’ from the sub-agents’ perspective. Unfortunately, we don’t really have the tools to do the analogue of perturbation theory in these more complicated cases. The salient difference seems to be that we can’t really scalarize ‘incoherence’: there are too many saliently distinct ways for a group to be ‘incoherent’, and no natural commensurability between them.
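To make the "distinct ways to be incoherent" point concrete, here's a toy sketch (my own hypothetical measures, not established ones): two different incoherence metrics for a group of agents with ranked preferences—one counting Condorcet-style majority cycles, one measuring raw disagreement. Both vanish for a unanimous group, but there's no natural exchange rate between a unit of one and a unit of the other.

```python
from itertools import permutations

# Toy illustration: two distinct "incoherence" measures for a group,
# with no natural commensurability between them. Names are made up.

def majority_prefers(rankings, x, y):
    # True if a strict majority of agents rank x above y.
    wins = sum(1 for r in rankings if r.index(x) < r.index(y))
    return wins > len(rankings) / 2

def condorcet_cycles(rankings, options):
    # Measure 1: number of 3-option cycles in the majority relation
    # (majority intransitivity, a group-level incoherence).
    cycles = 0
    for a, b, c in permutations(options, 3):
        if (majority_prefers(rankings, a, b)
                and majority_prefers(rankings, b, c)
                and majority_prefers(rankings, c, a)):
            cycles += 1
    return cycles // 3  # each cycle appears once per rotation

def preference_dispersion(rankings, options):
    # Measure 2: total variance in where agents place each option
    # (raw disagreement, a different flavor of incoherence).
    n = len(rankings)
    total = 0.0
    for x in options:
        positions = [r.index(x) for r in rankings]
        mean = sum(positions) / n
        total += sum((p - mean) ** 2 for p in positions) / n
    return total

# Classic Condorcet setup: pairwise majorities are cyclic (A>B>C>A).
rankings = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]
options = ("A", "B", "C")
print(condorcet_cycles(rankings, options))       # 1
print(preference_dispersion(rankings, options))  # 2.0
```

Scalarizing incoherence would mean committing to some weighting of cycles against dispersion (and every other failure mode), and nothing in the setup privileges one weighting over another—which is the obstacle to a perturbation-theory analogue.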