Why Subagents?

The justification for modelling real-world systems as “agents”—i.e. choosing actions to maximize some utility function—usually rests on various coherence theorems. They say things like “either the system’s behavior maximizes some utility function, or it is throwing away resources” or “either the system’s behavior maximizes some utility function, or it can be exploited” or things like that. Different theorems use slightly different assumptions and prove slightly different things, e.g. deterministic vs probabilistic utility function, unique vs non-unique utility function, whether the agent can ignore a possible action, etc.

One theme in these theorems is how they handle “incomplete preferences”: situations where an agent does not prefer one world-state over another. For instance, imagine an agent which prefers pepperoni over mushroom pizza when it has pepperoni, but mushroom over pepperoni when it has mushroom; it’s simply never willing to trade in either direction. There’s nothing inherently “wrong” with this; the agent is not necessarily executing a dominated strategy, cannot necessarily be exploited, or any of the other bad things we associate with inconsistent preferences. But the preferences can’t be described by a utility function over pizza toppings.

In this post, we’ll see that these kinds of preferences are very naturally described using subagents. In particular, when preferences are allowed to be path-dependent, subagents are important for representing consistent preferences. This gives a theoretical grounding for multi-agent models of human cognition.

Preference Representation and Weak Utility

Let’s expand our pizza example. We’ll consider an agent who:

  • Prefers pepperoni, mushroom, or both over plain cheese pizza

  • Prefers both over pepperoni or mushroom alone

  • Does not have a stable preference between mushroom and pepperoni—they prefer whichever they currently have

We can represent this using a directed graph:

The arrows show preference: our agent prefers B to A if (and only if) there is a directed path from A to B along the arrows. There is no path from pepperoni to mushroom or from mushroom to pepperoni, so the agent has no preference between them. In this case, we’re interpreting “no preference” as “agent prefers to keep whatever they have already”. Note that this is NOT the same as “the agent is indifferent”, in which case the agent would be willing to switch back and forth between the two options as long as the switch doesn’t cost anything.
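To make the graph concrete, here is a minimal Python sketch (the node names and the `prefers` helper are my own illustration, not anything from the original figure). Edges point from a less-preferred option toward a more-preferred one, so a preference query is just a reachability check:

```python
# A minimal sketch of the pizza preference graph. An edge A -> B means
# the agent will trade A for B, i.e. prefers B to A.
PREFERENCE_EDGES = {
    "cheese":    ["pepperoni", "mushroom"],
    "pepperoni": ["both"],
    "mushroom":  ["both"],
    "both":      [],
}

def reachable(graph, start, target):
    """Depth-first search: is there a directed path from start to target?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return False

def prefers(graph, better, worse):
    """The agent prefers `better` to `worse` iff `better` is reachable from `worse`."""
    return better != worse and reachable(graph, worse, better)

assert prefers(PREFERENCE_EDGES, "both", "cheese")             # both over cheese
assert not prefers(PREFERENCE_EDGES, "pepperoni", "mushroom")  # no preference...
assert not prefers(PREFERENCE_EDGES, "mushroom", "pepperoni")  # ...in either direction
```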

Key point: there is no cycle in this graph. If the agent’s preferences are cyclic, that’s when they provably throw away resources, paying to go in circles. As long as the preferences are acyclic, we call them “consistent”.

Now, at this point we can still define a “weak” utility function by ignoring the “missing” preference between pepperoni and mushroom. Here’s the idea: a normal utility function says “the agent always prefers the option with higher utility”. A weak utility function says: “if the agent has a preference, then they always prefer the option with higher utility”. The missing preference means we can’t build a normal utility function, but we can still build a weak utility function. Here’s how: since our graph has no cycles, we can always order the nodes so that the arrows only go forward along the sorted nodes—a technique called topological sorting. Each node’s position in the topological sort order is its utility. A small tweak to this method also handles indifference.
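Continuing the sketch above, here is one way to compute a weak utility with the standard-library topological sort (Python 3.9+); the exact numbers depend on which topological order the sorter happens to pick:

```python
# A sketch of the "weak utility" construction: topologically sort the
# acyclic preference graph and use each node's position as its utility.
# Reuses PREFERENCE_EDGES from the sketch above.
from graphlib import TopologicalSorter

def weak_utility(graph):
    # TopologicalSorter expects, for each node, the set of nodes that must
    # come earlier -- here, the less-preferred options.
    predecessors = {node: set() for node in graph}
    for worse, betters in graph.items():
        for better in betters:
            predecessors[better].add(worse)
    order = list(TopologicalSorter(predecessors).static_order())
    return {node: position for position, node in enumerate(order)}

print(weak_utility(PREFERENCE_EDGES))
# One possible output: {'cheese': 0, 'pepperoni': 1, 'mushroom': 2, 'both': 3}
# Swapping pepperoni and mushroom gives an equally valid weak utility.
```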

(Note: I’m using the term “weak utility” here because it seems natural; I don’t know of any standard term for this in the literature. Most people don’t distinguish between these two interpretations of utility.)

When preferences are incomplete, there are multiple possible weak utility functions. For instance, in our example, the topological sort order shown above gives pepperoni utility 1 and mushroom utility 2. But we could just as easily swap them!

Preference By Committee

The problem with the weak utility approach is that it treats the preference between pepperoni and mushroom as unknown—depending on which possible utility we pick, it could go either way. It’s pretending that there’s some hidden preference there which we simply don’t know. But there are real systems where the preference is not merely unknown; there is a genuine preference to stay in the current state.

For example, maybe our pizza-agent is actually a committee which must unanimously agree to any proposed change. One member prefers pepperoni to no pepperoni, regardless of mushrooms; the other prefers mushrooms to no mushrooms, regardless of pepperoni. This committee is not exploitable and does not throw away resources, nor does it have any hidden preference between pepperoni and mushrooms. Viewed as a black box, its “true” preference between pepperoni and mushrooms is to keep whichever it currently has.
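Here is a minimal sketch of that committee; the coordinate encoding of the pizzas and the two utility functions are my own illustrative choices. Each member compares their own utility before and after a proposed trade, and the trade goes through only if nobody’s utility drops:

```python
# A sketch of the two-member committee. Each pizza is encoded as a tuple
# (has_pepperoni, has_mushroom); each member's utility depends on only
# one of the two coordinates.
MEMBERS = [
    lambda state: state[0],  # pepperoni lover: utility = has_pepperoni
    lambda state: state[1],  # mushroom lover:  utility = has_mushroom
]

def committee_approves(current, proposed):
    """Unanimity: approve only if no member's utility goes down."""
    return all(member(proposed) >= member(current) for member in MEMBERS)

cheese, pepperoni, mushroom, both = (0, 0), (1, 0), (0, 1), (1, 1)
assert committee_approves(cheese, pepperoni)        # plain -> pepperoni: approved
assert committee_approves(pepperoni, both)          # pepperoni -> both: approved
assert not committee_approves(pepperoni, mushroom)  # pepperoni lover vetoes
assert not committee_approves(mushroom, pepperoni)  # mushroom lover vetoes
```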

In fact, it turns out that we can represent any consistent preferences by a committee requiring unanimous agreement.

The key idea here is called order dimension. We want to take our directed acyclic graph of preferences, and stick it into a multidimensional space so that there is an arrow from A to B if-and-only-if B is higher along all dimensions. Each dimension represents the utility of one subagent on the committee; that subagent approves a change only if the change does not decrease the subagent’s utility. In order for the whole committee to approve a change, the trade must increase (or leave unchanged) the utilities of all subagents. The minimum number of agents required to make this work—the minimum number of dimensions required—is the order dimension of the graph.

For instance, our pizza example has order dimension 2. We can draw it in a 2-dimensional space like this:

Note that, if there are infinitely many possibilities, then the order dimension can be infinite—we may need infinitely many agents to represent some preferences. But as long as the possibilities are finite, the order dimension will be as well.
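To make the 2-dimensional picture concrete, here is a sketch that checks the embedding against the preference graph, reusing `PREFERENCE_EDGES` and `prefers` from the earlier sketch. Matching the committee rule above, “higher along all dimensions” is read here as “no lower in any dimension, and strictly higher in at least one”:

```python
# A sketch checking that the 2-dimensional embedding reproduces the
# preference graph: the agent prefers B to A exactly when B's coordinates
# weakly dominate A's.
COORDS = {"cheese": (0, 0), "pepperoni": (1, 0), "mushroom": (0, 1), "both": (1, 1)}

def dominates(b, a):
    """b is no lower than a in every dimension, and differs somewhere."""
    return all(bi >= ai for bi, ai in zip(b, a)) and b != a

for a in COORDS:
    for b in COORDS:
        assert prefers(PREFERENCE_EDGES, b, a) == dominates(COORDS[b], COORDS[a])
```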

Path-Dependence

So far, we’ve interpreted “missing” preferences as “agent prefers to stay in current state”. One important reason for that interpretation is that it’s exactly what we need in order to handle path-dependent preferences.

In practice, path-dependent preferences mostly matter for systems with “hidden state”: internal variables which can change in response to the system’s choices. A great example of this is financial markets: they’re the ur-example of efficiency and inexploitability, yet it turns out that a market does not have a utility function in general (economists call this “nonexistence of a representative agent”). The reason is that the distribution of wealth across the market’s agents functions as an internal hidden variable. Depending on what path the market follows, different internal agents end up with different amounts of wealth, and the market as a whole will hold different portfolios as a result—even if the externally-visible variables, i.e. prices, end up the same.

Most path-dependence results from some hidden state directly, but even if we don’t know the hidden state, we can always add hidden state in order to model path-dependence. Whenever future preferences differ based on how the system reached the current state, we just split the state into two states—one for each possibility. Then we repeat, until we have a full set of states with path-independent preferences between them. These new states are “full” states of the system; from outside, some of them look the same.

An example: suppose I prefer New York to Boston if I just came from DC, but Boston to New York if I just came from Philadelphia.

We can represent that with hidden state:

We now have two separate hidden internal nodes, which both correspond to the same externally-visible state “New York”.

Now the key piece: there is no way to get from the “New York (from DC)” node to the “New York (from Philly)” node, or vice versa. The agent does not, and cannot, have a preference between these two nodes. Analogously, a market cannot have a preference between two different wealth distributions—the subagents who comprise a market will never spontaneously decide to redistribute their wealth amongst themselves. They always “prefer” (or “decide”) to stay in whatever state they’re currently in.

This is why we need to understand incomplete preferences in order to handle path-dependent preferences: hidden state creates situations where the agent “prefers” to stay in whatever state they’re in.
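As a concrete illustration, here is a sketch of the hidden-state graph for the travel example; the node names and edges are my own reading of the stated preferences, and `prefers` is reused from the pizza sketch:

```python
# A sketch of the hidden-state preference graph for the travel example.
# The two "New York" nodes look identical from outside but are distinct
# full states; the edges are one illustrative encoding of the preferences.
TRAVEL_EDGES = {
    "Boston (from DC)":       ["New York (from DC)"],   # prefer NY if I came from DC
    "New York (from Philly)": ["Boston (from Philly)"], # prefer Boston if I came from Philly
    "New York (from DC)":     [],
    "Boston (from Philly)":   [],
}

# No path between the two New York nodes, so no preference either way:
assert not prefers(TRAVEL_EDGES, "New York (from DC)", "New York (from Philly)")
assert not prefers(TRAVEL_EDGES, "New York (from Philly)", "New York (from DC)")
```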

Now we can easily model the system using subagents, exactly as we did for incomplete preferences. We have a directed preference graph between full states (including hidden state); it needs to be acyclic to avoid throwing away resources, so we can find a set of subagents to represent the preferences. In the case of a market, these are just the subagents which comprise the market: they’ll take a trade if it does not decrease the utility of any subagent. (Note, however, that the same externally-visible trade can correspond to multiple possible internal state changes; the subagents will take the trade if any of the possible internal state changes are non-utility-decreasing for all of them. For a market, this means they can trade amongst themselves in response to the external trade in order to make everyone happy.)
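Here is a sketch of that acceptance rule (all names are illustrative, reusing the pizza committee from earlier): a proposed external trade may correspond to several possible internal state changes, and it is accepted if at least one of them leaves no subagent worse off:

```python
# A sketch of the trade rule for a system with hidden internal state.
def accepts_trade(subagent_utilities, current_state, candidate_states):
    """Accept if some candidate internal outcome is weakly better for every subagent."""
    def unanimous(new_state):
        return all(u(new_state) >= u(current_state) for u in subagent_utilities)
    return any(unanimous(new_state) for new_state in candidate_states)

# With the pizza committee from earlier: a trade with two possible internal
# outcomes is accepted as long as one of them satisfies everyone.
assert accepts_trade(MEMBERS, pepperoni, [mushroom, both])
assert not accepts_trade(MEMBERS, pepperoni, [mushroom])
```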

Applications & Speculations

We’ve just argued that a system with consistent preferences can be modelled as a committee of utility-maximizing agents. How does this change our interpretation and predictions of the world?

First and foremost: the subagents argument is a generalization of the standard acyclic preferences argument. Anytime we might want to use the acyclic preferences argument, but there’s no reason for the system to be path-independent, we can apply the subagents argument instead. In practice, we usually expect systems to be efficient/inexploitable because of some selection pressure (evolution, market competition, etc) - and that selection pressure usually doesn’t care about path dependence in and of itself.

Main takeaway: pretty much anywhere we’d use an agent with a utility function to model something, we can apply the subagents argument and use a committee of agents with utility functions instead. In particular, this is a good replacement for “weak” utility functions.

Humans are a particularly interesting example. We’d normally use the acyclic preferences argument (among other arguments) to argue that humans approximate utility-maximizers in most situations. But there’s no particular reason to assume path-independence; indeed, human behavior looks highly path-dependent. So, apply the subagents argument. Hypothesis: human behavior approximates the choices of a committee of utility-maximizing agents in most situations.

Sound familiar? The subagents argument offers a theoretical basis for the idea that humans have lots of internal subagents, with competing wants and needs, all constantly negotiating with each other to decide on externally-visible behavior.

In principle, we could test this hypothesis more rigorously. Lots of people think of AI “learning what humans want” by asking questions or offering choices or running simulations. Personally, I picture an AI taking in a scan of a full human connectome, then directly calculating the embedded preferences. Someday, this will be possible. When the AI runs that calculation, do we expect it to find a single generic optimizer embedded in the system, approximately optimizing some “utility”? Or do we expect to find a bunch of separate generic optimizers, approximately optimizing several different “utilities”, and negotiating with each other? Probably neither picture is complete yet, but I’d bet the second is much closer to reality.

Conclusion

Let’s recap:

  • The acyclic preferences argument is the easiest entry point for efficiency/inexploitability-implies-utility-maximization theorems, but it doesn’t handle lots of important things, including path dependence.

  • Markets, for example, are efficient/inexploitable but can’t be represented by a utility function. They have hidden internal state—the distribution of wealth over agents—which makes their preferences path-dependent.

  • The subagents argument says that any system with deterministic, efficient/inexploitable preferences can be represented by a committee of utility-maximizing agents—even if the system has path-dependent or incomplete preferences.

  • That means we can substitute committees in many places where we currently use utilities. For instance, it offers a theoretical foundation for the idea that human behavior is described by many negotiating subagents.

One big piece which we haven’t touched at all is uncertainty. An obvious generalization of the subagents argument is that, once we add uncertainty (and a notion of efficiency/inexploitability which accounts for it), an efficient/inexploitable path-dependent system can be represented by a committee of Bayesian utility maximizers. I haven’t even started to tackle that conjecture yet; it’s a wide-open problem.