Vingean Agency

I’ve been involved with several discussions about different notions of agency (and their importance/relationships) lately, especially with the PIBBSS group including myself, Daniel, Josiah, and Ramana; see here.

There’s one notion of agency (not necessarily “The” notion of agency, but a coherent and significant notion) which vanishes if you examine it too closely.

Imagine that Alice is “smarter than Bob in every way”—that is, Bob believes that Alice knows everything Bob knows, and possibly more. Bob doesn’t necessarily agree with Alice’s goals, but Bob expects Alice to pursue them effectively. In particular, Bob expects Alice’s actions to be at least as effective as the best plan Bob can think of.
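
One rough way to write down that last condition (my notation, nothing official): let $\mathbb{E}_B$ be expectation under Bob's beliefs, $U_A$ be Alice's objective as Bob understands it, and $a_A$ be whatever Alice actually does. Then Bob's attitude toward Alice is something like

$$\mathbb{E}_B[U_A(a_A)] \;\ge\; \max_{a \,\in\, \text{plans Bob can think of}} \mathbb{E}_B[U_A(a)].$$

Bob is confident the left-hand side clears the bar set by his own best plan, without being able to say which action achieves it.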

Because Bob can’t predict what Alice will do, the only way Bob can further constrain his expectations is to figure out what’s good/bad for Alice’s objectives. In some sense, this seems like a best case for Bob modeling Alice as an agent: Bob understands Alice purely by understanding her as a goal-seeking force.

I’ll call this Vingean agency, since Vinge talked about the difficulty of predicting agents who are smarter than you, and since this usage is consistent with other uses of the term “Vingean” in relation to decision theory.

However, Vingean agency might seem hard to reconcile with other notions of agency. We typically think of “modeling X as an agent” as involving attribution of beliefs to X, not just goals. Agents have probabilities and utilities.

Bob has minimal use for attributing beliefs to Alice, because Bob doesn’t think Alice is mistaken about anything—the best he can do is to use his own beliefs as a proxy, and try to figure out what Alice will do based on that.[1]
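
One way to make this proxy idea a bit more precise (a sketch in my notation): if $P_B$ is Bob’s credence and $P_A$ is Alice’s, Bob’s total deference amounts to a reflection-style condition $P_B(X \mid P_A(X) = p) = p$, which implies

$$\mathbb{E}_B[P_A(X)] = P_B(X)$$

for any event $X$. So any ledger Bob keeps of “what Alice believes” just collapses, in expectation, into his own credences.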

When I say Vingean agency “disappears when we look at it too closely”, I mean that if Bob becomes smarter than Alice (understands more about the world, or has a greater ability to calculate the consequences of his beliefs), Alice’s Vingean agency will vanish.

We can imagine a spectrum. At one extreme is an Alice who knows everything Bob knows and more, like we’ve been considering so far. At the other extreme is an Alice whose behavior is so simple that Bob can predict it completely. In between these two extremes are Alices who know some things that Bob doesn’t know, while also lacking some information which Bob has.

(Arguably, Eliezer’s notion of optimization power is one formalization of Vingean agency, while Alex Flint’s attraction-basin notion of optimization defines a notion of agency at the opposite extreme of the spectrum, where we know everything about the whole system and can predict its trajectories through time.)
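
(For concreteness, the optimization-power measure is roughly: order all outcomes by the agent’s preferences, and ask how small a slice of the outcome space is at least as preferred as what the agent actually achieved, something like

$$\mathrm{OP} \;\approx\; -\log_2 \frac{\left|\{w \in W : w \succeq w_{\text{achieved}}\}\right|}{|W|}$$

for outcome space $W$. Notably, it only needs the preference ordering and the observed result, not a mechanistic model of the agent, which is what gives it the Vingean flavor.)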

I think this spectrum may be important to keep in mind when modeling different notions of agency. Sometimes we analyze agents from a logically omniscient perspective. In representation theorems (such as Savage or Jeffrey-Bolker, or their lesser sibling, VNM) we tend to take on a perspective where we can predict all the decisions of an agent (including hypothetical decisions which the agent will never face in reality). From this omniscient perspective, we then seek to represent the agent’s behavior by ascribing it beliefs and real-valued preferences (i.e., probabilities and expected utilities).
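
(Schematically, and glossing over the differences between these theorems: the input is the agent’s complete preference relation $\succeq$ over acts, including hypothetical ones, and the output is a probability $P$ and utility $U$ such that

$$a \succeq b \quad\Longleftrightarrow\quad \mathbb{E}_P[U \mid a] \;\ge\; \mathbb{E}_P[U \mid b].$$

The full table of hypothetical choices is exactly the thing Bob lacks for Alice.)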

However, this omniscient perspective eliminates Vingean agency from the picture. Thus, we might lose contact with one of the important pieces of the “agent” phenomenon, which can only be understood from a more bounded perspective.[2]

  1. ^

    On the other hand, if Bob knows Alice wants cheese, then as soon as Alice starts moving in a given direction, Bob might usefully conclude “Alice probably thinks cheese is in that direction”. So modeling Alice as having beliefs is certainly not useless for Bob. Still, because Bob thinks Alice knows better about everything, Bob’s estimate of Alice’s beliefs always matches Bob’s estimate of his own beliefs, in expectation. So in that sense, Bob doesn’t need to track Alice’s beliefs separately from his own. When Alice turns left, Bob can simply conclude “so there’s probably cheese in that direction” rather than tracking his and Alice’s beliefs separately.

  2. ^

    I also think it’s possible that Vingean agency can be extended to be “the” definition of agency, if we think that agency is just Vingean agency from some perspective. For example, ants have minimal Vingean agency from my perspective, because I already understand how they find the food in my house. However, I can easily inhabit a more naïve perspective in which this is unexplained. Indeed, it’s computationally efficient for me to model ants this way most of the time—ants simply find the food. It doesn’t matter how they do it.