Towards a New Decision Theory for Parallel Agents

A recent post, Consistently Inconsistent, raises some problems with the unitary view of the mind/brain and presents the modular view of the mind as an alternative hypothesis. The parallel/modular view of the brain not only deals better with the apparently hypocritical and contradictory ways our desires, behaviors, and beliefs seem to work, but also makes many successful empirical predictions, as well as postdictions. Much of that work can be found in Dennett’s 1991 book Consciousness Explained, which details both the empirical evidence against the unitary view and the failures of intuition involved in retaining a unitary view after being presented with that evidence.

The aim of this post is not to present further evidence in favor of the parallel view, nor to hammer any more nails into the unitary view’s coffin; the scientific and philosophical communities have done well enough in both departments to discard the intuitive hypothesis that there is some executive of the mind keeping things orderly. The dilemma I wish to raise is a question: “How should we update our decision theories to deal with independent, and sometimes inconsistent, desires and beliefs held by one agent?”


If we model one agent’s desires with one utility function, and this function orders the outcomes the agent can reach along one real axis, then it seems like we might be falling back into the intuitive view that there is some me in there with one definitive list of preferences. The picture given to us by Marvin Minsky and Dennett involves a bunch of individually dumb agents, each with a unique set of specialized abilities and desires, interacting in such a way as to produce one smart agent with a diverse set of abilities and desires; the smart agent only appears when viewed from the right level of description. For convenience, we will call those dumb, specialized agents “subagents”, and the smart, diverse agent that emerges from their interaction “the smart agent”. When one considers what it would be useful for a seeing-neural-unit to want to do, and contrasts it with what it would be useful for a get-that-food-neural-unit to want to do, e.g., examine that prey longer vs. charge that prey, turn head vs. keep running forward, stay attentive vs. eat that food, etc., it becomes clear that cleverly managing which unit gets to have how much control, and when, is an essential part of the decision-making process of the whole. Decision theory, as far as I can tell, does not model any part of that managing process; instead we treat the smart agent as having its own set of desires, and don’t discuss how the subagents’ goals are being managed to produce that global set of desires.

It is possible that the many subagents in a brain, when they operate in concert, act isomorphically to an agent with one utility function and a unique problem space. A trivial example of such an agent might have only two subagents, “A” and “B”, and possible outcomes O1 through On. We can plot the utilities that each subagent assigns to these outcomes on a two-dimensional positive Cartesian graph, A’s assigned utilities being represented by position in X, and B’s utilities by position in Y. The method by which these subagents are managed to produce behavior might just be: go for the possible outcome furthest from (0,0); in which case, the utility function of the whole agent U(Ox) would just be the distance from (0,0) to (A’s U(Ox), B’s U(Ox)).
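To make the toy example concrete, here is a minimal sketch in Python. The outcome names and utility numbers are invented; the only substantive parts are the two utility dictionaries and the furthest-from-(0,0) managing rule described above.

```python
import math

# Utilities the two subagents assign to the shared outcomes O1..O3.
# The numbers are made up for illustration.
subagent_a = {"O1": 3.0, "O2": 1.0, "O3": 2.0}   # A's utilities (position in X)
subagent_b = {"O1": 0.5, "O2": 4.0, "O3": 2.0}   # B's utilities (position in Y)

def composite_utility(outcome):
    """U(Ox) of the whole agent: distance from (0,0) to (A's U(Ox), B's U(Ox))."""
    return math.hypot(subagent_a[outcome], subagent_b[outcome])

# The managing algorithm: go for the possible outcome furthest from (0,0).
chosen = max(subagent_a, key=composite_utility)
print(chosen, composite_utility(chosen))   # -> O2, since sqrt(1 + 16) is largest
```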

An agent which manages its subagents so as to be isomorphic to one utility function on one problem space is certainly mathematically describable, but also implausible. It is unlikely that the actual physical-neural subagents in a brain deal with the same problem spaces, i.e., each has its own unique set of outcomes O1 through On. It is not as if all the subagents are playing the same game, each with a unique goal within that game; they each have their own unique set of legal moves too. This makes it problematic to model the global utility function of the smart agent as assigning one real number to every member of a set of possible outcomes, since there is no one set of possible outcomes for the smart agent as a whole. Each subagent has its own search space, with its own format of representation for that problem space. The problem space and utility function of the smart agent are implicit in the interactions of the subagents; they emerge from the interactions of agents on a lower level; the smart agent’s utility function and problem space are never explicitly written down.

A useful example is a smoker who is trying to quit. Some part of their brain that can do complicated predictions doesn’t want its body to smoke. This part of the brain wants to avoid death, i.e., will avoid death if it can, and knows that choosing the possible outcome of smoking puts its body at high risk of death. Another part of the brain wants nicotine, and knows that choosing the move of smoking gets it nicotine. The nicotine-craving subagent doesn’t want to die, but it also doesn’t want to stay alive; these outcomes aren’t in the domain of the nicotine subagent’s utility function at all. The part of the brain responsible for predicting its body’s death if it continues to smoke probably isn’t significantly rewarded by nicotine in a parallel manner. If a cigarette is around and offered to the smart agent, these subagents must compete for control of the relevant parts of the body, e.g., the nicotine subagent might set off a global craving, while the predict-the-future subagent might set off a vocal response saying “no thanks, I’m quitting.” The overall desire of the smart agent to smoke or not smoke is just the result of this competition. Similar examples can be made with different desires, like the desire to overeat and the desire to look slim, or the desire to stay seated and the desire to eat a warm meal.
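The smoker example can be sketched in the same style. Here the two subagents score different outcome sets, so there is no single shared table of utilities to read the decision off of; the influence weights and the weighted-vote managing rule below are illustrative assumptions only, not a claim about how brains actually settle the competition.

```python
# Each subagent has its own outcome domain: "death" and "staying alive" simply
# do not appear in the nicotine subagent's utility function at all.
predictor = {"smoke": -10.0, "decline": 5.0}   # models long-term consequences
nicotine_craver = {"smoke": 8.0}               # knows nothing about death

# (utilities, influence) pairs; the weights are hypothetical stand-ins for
# how much control each subagent tends to win.
subagents = [(predictor, 1.0), (nicotine_craver, 0.7)]

def managed_choice(options):
    def support(option):
        # Each subagent only votes on outcomes it actually represents.
        return sum(w * u[option] for u, w in subagents if option in u)
    return max(options, key=support)

print(managed_choice(["smoke", "decline"]))   # -> "decline" with these weights
```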

We may call the algorithm which settles these internal power struggles the “managing algorithm”, and we may call a decision theory which models managing algorithms a “parallel decision theory”. It is not the business of decision theorists to discover the specifics of the human managing process; that is the business of empirical science. But certain features of the human managing algorithm can reasonably be decided on. It is very unlikely that our managing algorithm is utilitarian, for example, i.e., the smart agent doesn’t do whatever gets the highest net utility for its subagents. Some subagents are more powerful than others: they have a higher prior chance of success than their competitors; others are weak in a parallel fashion. The question of what counts as one subagent in the brain is another empirical question which is not the business of decision theorists either, but anything that we do consider a subagent in a parallel theory must solve its problem in the form of a CSA, i.e., it must internally represent its outcomes, know what outcomes it can get to from whatever outcome it is at, and assign a utility to each outcome. There are likely many neural units that fit that description in the brain. Many of them probably contain as parts sub-subagents which also fit this description, but eventually, if you divide the parts enough, you get to neurons, which are not CSAs, and thus not subagents.
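For concreteness, here is one way a lowest-level “black box” CSA could be written down, following the description above: it represents outcomes, knows which outcomes it can reach from the one it is at, and assigns a utility to each. The class name and fields are my own illustrative choices, not a standard formalism.

```python
from dataclasses import dataclass
from typing import Dict, Set

@dataclass
class CSA:
    current: str                  # the outcome this subagent takes itself to be at
    could: Dict[str, Set[str]]    # outcome -> outcomes reachable from it
    utility: Dict[str, float]     # outcome -> how much this subagent wants it

    def reachable(self) -> Set[str]:
        return self.could.get(self.current, set())

    def preferred_move(self) -> str:
        # The move this subagent would make if it had full control of the body.
        return max(self.reachable(), key=lambda o: self.utility.get(o, float("-inf")))
```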

If we want to understand how we make decisions, we should try to model a CSA which is made out of more specialized sub-CSAs competing and agreeing, which are made out of further specialized sub-sub-CSAs competing and agreeing, and so on, until we reach parts that are made out of non-CSA algorithms. If we don’t understand that, we don’t understand how brains make decisions.
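That recursive picture can be sketched by reusing the CSA class above: a composite exposes the same interface as its parts, so composites can themselves be parts of still larger composites. The “most influential part wins” managing rule is a placeholder; any managing algorithm could be slotted in.

```python
class CompositeCSA:
    """A CSA built only out of sub-CSAs (or other composites) plus a managing
    algorithm; it holds no explicit utility function or problem space of its own."""

    def __init__(self, parts, influences):
        self.parts = parts            # CSA instances or other CompositeCSAs
        self.influences = influences  # hypothetical per-part weights

    def preferred_move(self) -> str:
        # Placeholder managing algorithm: the most influential part decides.
        winner = max(zip(self.parts, self.influences), key=lambda pair: pair[1])[0]
        return winner.preferred_move()
```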


I hope that the considerations above are enough to convince reductionists that we should develop a parallel decision theory if we want to reduce decision making to computing. I would like to add an axiomatic parallel decision theory to the LW arsenal, but I know that that is not a one man/woman job. So, if you think you might be of help in that endeavor, and are willing to devote yourself to some degree, please contact me at hastwoarms@gmail.com. Any team we assemble will likely not meet in person often, but will hopefully meet frequently on some private forum. We will need decision theorists, general mathematicians, people intimately familiar with the modular theory of mind, and people familiar with neural modeling. What follows are some suggestions for any team or individual that might pursue this goal independently:

  • The specifics of the managing algorithm used in brains are mostly unknown. As such, any parallel decision theory should be built to handle as diverse a range of managing algorithms as possible.

  • No composite agent should have any property that is not reducible to the interactions of the agents it is made out of. If you have a complete description of the subagents, and a complete description of the managing algorithm, you have a complete description of the smart agent (see the sketch after this list).

  • There is nothing wrong with treating the lowest level of CSAs as black boxes. The specifics of the non-CSA algorithms which the lowest-level CSAs are made out of are not relevant to parallel decision theory.

  • Make sure that the theory can handle each subagent having its own unique set of possible outcomes, and its own unique method of representing those outcomes.

  • Make sure that each CSA above the lowest level actually has “could”, “should”, and “would” labels on the nodes in its problem space, and make sure that those labels, their values, and the problem space itself can be reduced to the managing of the CSAs on the level below.

  • Each level above the lowest should have CSAs dealing with a more diverse range of problems than the ones on the level below. The lowest level should have the most specialized CSAs.

  • If you’ve achieved the six goals above, try comparing your parallel decision theory to other decision theories; see how much predictive accuracy is gained by using a parallel decision theory instead of the classical theories.
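As a toy check of the reducibility suggestion above, here is a usage example built from the CSA and CompositeCSA sketches earlier in the post: given complete descriptions of the sub-CSAs and of the managing algorithm, the smart agent’s choice is fully determined, with nothing left over. All outcome names and numbers are invented.

```python
see_prey = CSA(current="idle",
               could={"idle": {"examine", "charge"}},
               utility={"examine": 3.0, "charge": 1.0})
get_food = CSA(current="idle",
               could={"idle": {"charge", "eat"}},
               utility={"charge": 4.0, "eat": 5.0})

smart_agent = CompositeCSA(parts=[see_prey, get_food], influences=[0.4, 0.6])
print(smart_agent.preferred_move())   # -> "eat": fixed entirely by the parts
                                      #    plus the managing rule
```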