Coalitional agency seems like an unnecessary constraint on the design of a composite agent, since an individual agent could simply (choose to) listen to other agents and behave the way their coalition would endorse, thereby effectively becoming a composite agent without being composite “by construction”. The step where an agent chooses which other (hypothetical) agents to listen to makes constraints on the nature of agents unnecessary, because the choice to listen to some agents and not others can impose whatever constraints that particular agent cares about, so an “agent” could be something as vague as a “computation” or a program.
(Choosing to listen to a computation means choosing a computation based on considerations other than its output, committing to use its output in a particular way without yet knowing what it’s going to be, and carrying out that commitment once the output becomes available, regardless of what it turns out to be.)
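The ordering described here, fix the commitment first, run the computation second, carry out the commitment regardless of the result, can be sketched in a few lines. This is a minimal illustrative sketch, not anything from the original discussion; the names `listen`, `advice`, and `policy` are all hypothetical:

```python
from typing import Callable

def listen(computation: Callable[[], int],
           commitment: Callable[[int], str]) -> str:
    """Commit to a use of the output before seeing it, then carry it out."""
    # Step 1: the commitment is fixed here, before the output is known.
    committed = commitment
    # Step 2: only now is the computation actually run.
    output = computation()
    # Step 3: the commitment is honored regardless of what the output is.
    return committed(output)

# Example: an agent defers to another (hypothetical) agent's advice.
advice = lambda: 42               # stands in for an unpredictable computation
policy = lambda x: f"act on {x}"  # fixed in advance, whatever x turns out to be
print(listen(advice, policy))     # prints "act on 42"
```

The point of routing everything through `listen` is that the agent's response to the output is chosen on considerations other than the output itself, which is what distinguishes listening from merely observing.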
This way we can get back to individual rationality: figuring out which other agents/computations an agent should choose to listen to when forming its own beliefs and decisions. Actually listening to those other computations, at least occasionally, is the step missing from most decision theories; it would take care of interaction with other agents (both actual and hypothetical).
Hmm, this feels analogous to saying “companies are an unnecessary abstraction in economic theory, since individuals could each make separate contracts about how they’ll interact with each other. Therefore we can reduce economics to studying isolated individuals”.
But companies are in fact a very useful unit of analysis. For example, instead of talking about the separate ways in which each person in a company has committed to treat each other person, you can talk about the HR policy that governs all interactions within the company. You might then see emergent effects (like political battles over what the HR policies are) which are very hard to reason about from a single-agent view.
Similarly, although in principle you could have any kind of graph of which agents listen to which other agents, in practice I expect that realistic agents will tend to consist of clusters of agents which all “listen to” each other in some ways. This is both because clustering is efficient (hence animals having bodies made up of clusters of cells, companies being made of clusters of individuals, etc.) and because even defining what counts as a single agent is itself a kind of clustering. That is, I think the first step in talking about “individual rationality” is implicitly defining which coalitions qualify as individuals.
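One way to make the clustering intuition concrete: if “A listens to B” is a directed edge, then clusters of agents that all (transitively) listen to each other are the strongly connected components of the listening graph. A hedged sketch, with an illustrative toy graph; the function names are my own, not from the discussion:

```python
def reachable(graph, start):
    """All nodes reachable from `start` by following listening edges."""
    seen, stack = {start}, [start]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def mutual_clusters(graph):
    """Group agents into clusters that mutually listen to each other
    (strongly connected components, computed naively via reachability)."""
    nodes = set(graph) | {n for vs in graph.values() for n in vs}
    reach = {n: reachable(graph, n) for n in nodes}
    clusters = []
    for n in nodes:
        comp = frozenset(m for m in nodes if n in reach[m] and m in reach[n])
        if comp not in clusters:
            clusters.append(comp)
    return clusters

# Toy example: a, b, c listen to each other in a cycle;
# d listens to a, but nobody listens back to d.
listens_to = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["a"]}
print(mutual_clusters(listens_to))  # clusters: {'a','b','c'} and {'d'}
```

On this picture, calling something a “single agent” amounts to picking out one such cluster and treating its internal listening structure as an implementation detail.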
My point is more that you wouldn’t want to define individuals as companies, or to say that only companies but not individuals can have agency. And that the things you are trying to get out of coalitional agency could already be there within individual agency.
A notion of choosing to listen to computations (or of analyzing things that are not necessarily agents in terms of which computations influence them) keeps coming up in my own investigations as a way of formulating coordination, decision-making under logical uncertainty, and learning/induction. It incidentally seems useful for expressing coalitions, or for putting market-like things inside agents. I dunno, it’s not sufficiently fleshed out, so I don’t really have a legible argument here, mostly an intuition that a notion of listening to computations would be more flexible in this context.