Multiagent Models of Mind

A typical paradigm by which people tend to think of themselves and others is as consequentialist agents: entities who can be usefully modeled as having beliefs and goals, and who then act according to their beliefs to achieve their goals.

This is often a useful model, but it doesn’t quite capture reality. It’s a bit of a fake framework. Or in computer science terms, you might call it a leaky abstraction.

An abstraction, in the computer science sense, is a simplification which tries to hide the underlying details of a thing, letting you think in terms of the simplification rather than the details. To the extent that the abstraction actually succeeds in hiding the details, this makes things a lot simpler. But inevitably the abstraction sometimes leaks: the simplification fails to predict some of the actual behavior that emerges from the details, and in that situation you need to actually know the underlying details and be able to think in terms of them.
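To give a concrete illustration of my own (this example is not from the post or the sources it references), floating-point numbers are an abstraction over the real numbers: most of the time you can ignore how they are stored as bits, but the abstraction leaks as soon as you care about exact equality:

```python
# Floating-point numbers abstract over the real numbers, letting us mostly
# ignore how they are represented in binary. The abstraction usually holds...
print(0.1 + 0.2)         # 0.30000000000000004  <- ...but here it leaks
print(0.1 + 0.2 == 0.3)  # False

# To predict this behavior, you have to drop down to the underlying details:
# 0.1 and 0.2 have no exact binary representation.
from decimal import Decimal
print(Decimal(0.1))      # shows the exact value actually stored for 0.1
```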

Agent-ness being a leaky abstraction is not exactly a novel concept for Less Wrong; it has been touched upon several times, such as in Scott Alexander’s Blue-Minimizing Robot Sequence. At the same time, I do not think it has been fully internalized yet, and many foundational posts on LW go wrong because they are premised on the assumption that humans are agents. In fact, I would go as far as to claim that this is the biggest flaw of the original Sequences: they attempted to explain many failures of rationality as being due to cognitive biases, when in retrospect it looks like understanding cognitive biases doesn’t actually make you substantially more effective. But if you are implicitly modeling humans as goal-directed agents, then cognitive biases are the most natural place for irrationality to emerge from, so it makes sense to focus most of your attention there.

Just knowing that an abstraction leaks isn’t enough to improve your thinking, however. To do better, you need to know the actual underlying details, so as to build a better model. In this sequence, I will aim to elaborate on various tools for thinking about minds which look at humans in more granular detail than the classical agent model does. Hopefully, this will help us get past the old paradigm.

One particular family of models that I will be discussing is that of multi-agent theories of mind. Here the claim is not that we literally have multiple personalities. Rather, my approach will be similar in spirit to the one in Subagents Are Not A Metaphor:

Here are the parts composing my technical definition of an agent:
1. Values
This could be anything from literally a utility function to highly framing-dependent. Degenerate case: embedded in lookup table from world model to actions.
2. World-Model
Degenerate case: stateless world model consisting of just sense inputs.
3. Search Process
Causal decision theory is a search process. “From a fixed list of actions, pick the most positively reinforced” is another. Degenerate case: lookup table from world model to actions.
Note: this says a thermostat is an agent. Not figuratively an agent. Literally technically an agent. Feature not bug.

This is a model that can be applied naturally to a wide range of entities, as seen from the fact that thermostats qualify. And the reason why we tend to automatically think of people, or thermostats, as agents is that our brains have evolved to naturally model things in terms of this kind of intentional stance; it’s a way of thought that comes natively to us.
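To make the definition more concrete, here is a minimal sketch of my own (the class and the specific numbers are illustrative, not taken from the quoted post) of a thermostat as an agent in exactly this degenerate sense:

```python
from typing import Optional

# A thermostat as a (degenerate) agent, per the three-part definition quoted above.
class Thermostat:
    def __init__(self, target_temp: float):
        # 1. Values: a single preference, "keep the temperature near target_temp".
        self.target_temp = target_temp
        # 2. World-model (degenerate case): stateless, just the latest sense input.
        self.sensed_temp: Optional[float] = None

    def sense(self, temperature: float) -> None:
        # Update the world-model with the current sense input.
        self.sensed_temp = temperature

    def act(self) -> str:
        # 3. Search process (degenerate case): a lookup table from
        # world-model states to actions.
        if self.sensed_temp is not None and self.sensed_temp < self.target_temp:
            return "heat"
        return "idle"


thermostat = Thermostat(target_temp=21.0)
thermostat.sense(18.5)
print(thermostat.act())  # "heat"
```

The point is not that this code is interesting in itself, but that nothing more than this is needed to satisfy the definition; the same three slots can then be filled with much richer values, world-models, and search processes when we talk about parts of a human mind.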

Given that we want to learn to think about humans in a new way, we should look for ways to map the new way of thinking into a native mode of thought. One of my tactics will be to look for parts of the mind that look like they could literally be agents (as in the above technical definition of an agent), so that we can replace our intuitive one-agent model with intuitive multi-agent models without needing to make trade-offs between intuitiveness and truth. This will still be a leaky simplification, but hopefully it will be a more fine-grained leaky simplification, so that overall we’ll be more accurate.

Sequence introduction: non-agent and multiagent models of mind

Book Summary: Consciousness and the Brain

Building up to an Internal Family Systems model

Subagents, introspective awareness, and blending

Subagents, akrasia, and coherence in humans

Integrating disagreeing subagents

Subagents, neural Turing machines, thought selection, and blindspots

Subagents, trauma and rationality