Partial Agency

Epistemic status: very rough intuitions here.

I think there’s something interesting going on with Evan’s notion of myopia.

Evan has been calling this thing “myopia”. Scott has been calling it “stop-gradients”. In my own mind, I’ve been calling the phenomenon “directionality”. Each of these words gives a different set of intuitions about how the cluster could eventually be formalized.

Stop-Gradients

Nash equilibria are, abstractly, modeling agents via an equation like $x = \operatorname{argmax}_{x'} f(x', x)$. In words: $x$ is the agent’s mixed strategy. The payoff $f$ is a function of the mixed strategy in two ways: the first argument is the causal channel, where actions directly have effects; the second argument represents the “acausal” channel, IE, the fact that the other players know the agent’s mixed strategy and this influences their actions. The agent is maximizing across the first channel, but “ignoring” the second channel; that is why we have to solve for a fixed point to find Nash equilibria. This motivates the notion of “stop gradient”: if we think in terms of neural-network type learning, we’re sending the gradient through the first argument but not the second. (It’s a kind of mathematically weird thing to do!)
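To make the stop-gradient picture concrete, here is a minimal sketch in Python with JAX, using a made-up quadratic payoff $f$ and an arbitrary learning rate (both are illustrative assumptions, not anything from the text). Gradient ascent through the first argument only converges to a fixed point of the best-response map, which is not the same point as directly maximizing $f(x, x)$.

```python
import jax
import jax.numpy as jnp

def payoff(action, known_strategy):
    # Toy quadratic payoff, invented for illustration: the first argument
    # is the causal channel; the second stands in for the other players'
    # knowledge of our mixed strategy.
    return -(action - 1.0) ** 2 - 0.5 * action * known_strategy

def myopic_objective(x):
    # Send the gradient through the first argument only; the second is
    # treated as a fixed fact about the world (the "stop gradient").
    return payoff(x, jax.lax.stop_gradient(x))

grad_fn = jax.grad(myopic_objective)
x = 0.0
for _ in range(200):
    x = x + 0.1 * grad_fn(x)   # gradient ascent on the causal channel only

print(x)  # ~0.8: a fixed point of the best-response map
# By contrast, directly maximizing payoff(x, x) ("full agency") gives x = 2/3.
```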

Myopia

Thinking in terms of iterated games, we can also justify the label “myopia”. Thinking in terms of “gradients” suggests that we’re doing some kind of training involving repeatedly playing the game. But we’re training an agent to play as if it’s a single-shot game: the gradient is rewarding behavior which gets more reward within the single round even if it compromises long-run reward. This is a weird thing to do: why implement a training regime to produce strategies like that, if we believe the Nash-equilibrium model, IE, we think the other players will know our mixed strategy and react to it? We can, for example, win chicken by going straight more often than is myopically rational. Generally speaking, we expect to get better rewards in the rounds after training if we optimized for non-myopic strategies during training.
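As a concrete illustration of the chicken example, here is a small Python sketch with made-up payoffs and the assumption (from the Nash-equilibrium model above) that the opponent best-responds to our known mixed strategy. With these numbers the symmetric mixed equilibrium goes straight only 10% of the time and earns −0.1; committing to go straight more often than that forces the opponent to swerve and does strictly better.

```python
import numpy as np

# Chicken with made-up payoffs (rows = my action, cols = opponent's action;
# index 0 = swerve, 1 = straight). The game is symmetric.
MY_PAYOFF = np.array([[ 0.0,  -1.0],
                      [ 1.0, -10.0]])
OPP_PAYOFF = MY_PAYOFF.T

def opponent_best_response(p_straight):
    # The opponent knows my mixed strategy (the Nash-equilibrium modeling
    # assumption) and picks whichever action does better against it.
    my_mix = np.array([1.0 - p_straight, p_straight])
    return int(np.argmax(my_mix @ OPP_PAYOFF))

def my_expected_payoff(p_straight):
    opp_action = opponent_best_response(p_straight)
    my_mix = np.array([1.0 - p_straight, p_straight])
    return my_mix @ MY_PAYOFF[:, opp_action]

# Below 10% straight, the best-responding opponent goes straight and I do
# badly; above it, the opponent swerves and going straight more pays off.
for p in [0.05, 0.11, 0.5, 0.9]:
    print(p, my_expected_payoff(p))
```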

Directionality

To justify my term “directionality” for these phenomena, we have to look at a different example: the idea that “when beliefs and reality don’t match, we change our beliefs”. IE: when optimizing for truth, we optimize “only in one direction”. How is this possible? We can write down a loss function, such as Bayes’ loss, to define accuracy of belief. But how can we optimize it only “in one direction”?

We can see that this is the same thing as myopia. When training predictors, we only consider the efficacy of hypotheses one instance at a time. Consider supervised learning: we have “questions” $x_1, x_2, \ldots$ and are trying to learn “answers” $y_1, y_2, \ldots$. If a neural network were somehow able to mess with the training data, it would not have much pressure to do so. If it could give an answer on instance $n$ which improved its ability to answer on instance $n+1$ by manipulating $x_{n+1}$, the gradient would not specially favor this. Suppose it is possible to take some small hit (in log-loss terms) on $y_n$ for a large gain on $y_{n+1}$. The large gain for $y_{n+1}$ would not reinforce the specific neural patterns responsible for making $x_{n+1}$ easy (only the patterns responsible for successfully taking advantage of the easiness). The small hit on $y_n$ means there’s an incentive not to manipulate $x_{n+1}$.

It is possible that the neural network learns to manipulate the data, if by chance the neural patterns which shift $x_{n+1}$ are the same as those which successfully exploit the manipulation at instance $n+1$. However, this is a fragile situation: if there are other neural sub-patterns which are equally capable of giving the easy answer on $x_{n+1}$, the reward gets spread around. (Think of these as parasites taking advantage of the manipulative strategy without doing the work necessary to sustain it.) Because of this, the manipulative sub-pattern may not “make rent”: the amount of positive gradient it gets may not make up for the hit it takes on $y_n$. And all the while, neural sub-patterns which do better on $y_n$ (by refusing to take the hit) will be growing stronger. Eventually they can take over. This is exactly like myopia: strategies which do better in a specific case are favored for that case, despite global loss. The neural network fails to successfully coordinate with itself to globally minimize loss.

To see why this is also like stop-gradients, think about the loss function as $f(\theta, \theta)$: the neural weights $\theta$ determine loss through a “legitimate” channel (the prediction quality on a single instance), plus an “illegitimate” channel (the cross-instance influence which allows manipulation of $x_{n+1}$ through the answer given for $x_n$). We’re optimizing through the first channel, but not the second.
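Here is a minimal sketch of that two-channel picture, in Python with JAX. Everything in it is a made-up stand-in for the story above: one parameter plays the role of the sub-pattern that manipulates $x_{n+1}$ (taking the small hit at instance $n$), another plays the sub-pattern that exploits the resulting easiness, and the stop-gradient marks the illegitimate channel that training never differentiates through.

```python
import jax
import jax.numpy as jnp

# Made-up stand-ins for the story in the text:
#   theta[0] ("manipulate") distorts the answer at instance n, which by
#       hypothesis makes question x_{n+1} easier;
#   theta[1] ("exploit") cashes in on that easiness at instance n+1.

def easiness_of_next_question(theta):
    return theta[0]                       # the cross-instance channel

def loss_at_n(theta):
    return 0.1 * theta[0] ** 2            # small log-loss hit for manipulating

def loss_at_n_plus_1(theta, easiness):
    return (1.0 - theta[1] * easiness) ** 2

def supervised_total_loss(theta):
    # Supervised training treats x_{n+1} as fixed data, so no gradient
    # flows back through the manipulation channel.
    easiness = jax.lax.stop_gradient(easiness_of_next_question(theta))
    return loss_at_n(theta) + loss_at_n_plus_1(theta, easiness)

theta = jnp.array([1.0, 0.5])
print(jax.grad(supervised_total_loss)(theta))
# -> approximately [0.2, -1.0]: gradient descent shrinks the "manipulate"
#    coordinate (it only sees the small hit at n) while reinforcing
#    "exploit", which free-rides on the easier data. The manipulative
#    sub-pattern doesn't "make rent".
```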

The difference between supervised learning and reinforcement learning is just this: reinforcement learning explicitly tracks the helpfulness of strategies across time, rather than assuming that a high score at time $t$ has to do only with behaviors at time $t$! As a result, RL can coordinate with itself across time, whereas supervised learning cannot.
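A toy contrast, as a sketch (the episode of rewards and the discount factor are invented): a supervised-style signal judges the behavior at step $t$ by the loss at step $t$ alone, while a discounted return credits the action at step $t$ with everything that happens afterward.

```python
import numpy as np

rewards = np.array([1.0, 0.0, 0.0, 5.0])   # hypothetical episode
gamma = 0.9

# Supervised-style credit: step t is judged by step t only.
per_step_credit = rewards

# RL-style credit: the discounted return from step t onward, so the action
# at t = 0 is also credited for the reward that arrives at t = 3.
returns = np.array([sum(gamma ** (k - t) * rewards[k]
                        for k in range(t, len(rewards)))
                    for t in range(len(rewards))])

print(per_step_credit)   # [1. 0. 0. 5.]
print(returns)           # [4.645 4.05  4.5   5.   ]
```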

Keep in mind that this is a good thing: the algorithm may be “leaving money on the table” in terms of prediction accuracy, but this is exactly what we want. We’re trying to make the map match the territory, not the other way around.

Important side-note: this argument obviously has some relation to the question of how we should think about inner optimizers and how likely we should expect them to be. However, I think it is not a direct argument against inner optimizers. (1) The emergence of an inner optimizer is exactly the sort of situation where the gradients end up all feeding through one coherent structure. Other potential neural structures cannot compete with the sub-agent, because it has started to intelligently optimize; few interlopers can take advantage of the benefits of the inner optimizer’s strategy, because they don’t know enough to do so. So, all gradients point to continuing the improvement of the inner optimizer rather than to alternate, more-myopic strategies. (2) Being an inner optimizer is not synonymous with non-myopic behavior. An inner optimizer could give myopic responses on the training set while internally having less-myopic values. Or, an inner optimizer could have myopic but very divergent values. Importantly, an inner optimizer need not take advantage of any data-manipulation of the training set of the kind I’ve described; it need not even have access to any such opportunities.

The Partial Agency Paradox

I’ve given a couple of examples. I want to quickly give some more to flesh out the clusters as I see them:

  • As I said, myopia is “partial agency” whereas foresight is “full agency”. Think of how an agent with high time-preference (IE, steep temporal discounting) can be money-pumped by an agent with low time-preference. But the limit of no-temporal-discounting-at-all is not always well-defined.

  • An updateful agent is “partial agency” whereas updatelessness is “full agency”: the updateful agent is failing to use some channels of influence to get what it wants, because it already knows those things and can’t imagine them going differently. Again, though, full agency seems to be an idealization we can’t quite reach: we don’t know how to think about updatelessness in the context of logical uncertainty, only more- or less-updateful strategies.

  • I gave the beliefs→territory example. We can also think about the values→territory case: when the world differs from our preferences, we change the world, not our preferences. This has to do with avoiding wireheading.

  • Similarly, we can think of examples of corrigibility—such as respecting an off button, or avoiding manipulating the humans—as partial agency.

  • Causal decision theory is more “partial” and evidential decision theory is less so: EDT wants to recognize more things as legitimate channels of influence, while CDT claims they’re not. Keep in mind that the math of causal intervention is closely related to the math which tells us about whether an agent wants to manipulate a certain variable—so there’s a close relationship between CDT-vs-EDT and wireheading/corrigibility.

I think people often take a pro- or anti- partial agency position: if you are trying to one-box in Newcomblike problems, trying to cooperate in prisoner’s dilemma, trying to define logical updatelessness, trying for superrationality in arbitrary games, etc… you are generally trying to remove barriers to full agency. On the other hand, if you’re trying to avert instrumental incentives, make sure an agent allows you to change its values, or doesn’t prevent you from pressing an off button, or doesn’t manipulate human values, etc… you’re generally trying to add barriers to full agency.

I’ve historically been more interested in dropping barriers to full agency. I think this is partially because I tend to assume that full agency is what to expect in the long run, IE, “all agents want to be full agents”—evolutionarily, philosophically, etc. Full agency should result from instrumental convergence. Attempts to engineer partial agency for specific purposes feel like fighting against this immense pressure toward full agency; I tend to assume they’ll fail. As a result, I tend to think about AI alignment research as (1) needing to understand full agency much better, (2) needing to mainly think in terms of aligning full agency, rather than averting risks through partial agency.

However, in contrast to this historical view of mine, I want to make a few observations:

  • Partial agency sometimes seems like exactly what we want, as in the case of map→territory optimization, rather than a crude hack which artificially limits things.

  • Indeed, partial agency of this kind seems fundamental to full agency.

  • Partial agency seems ubiquitous in nature. Why should I treat full agency as the default?

So, let’s set aside pro/con positions for a while. What I’m interested in at the moment is the descriptive study of partial agency as a phenomenon. I think this is an organizing phenomenon behind a lot of stuff I think about.

The partial agency paradox is: why do we see partial agency naturally arising in certain contexts? Why are agents (so often) myopic? Why have a notion of “truth” which is about map→territory fit but not the other way around? Partial agency is a weird thing. I understand what it means to optimize something. I understand how a selection process can arise in the world (evolution, markets, machine learning, etc), which drives things toward maximization of some function. Partial optimization is a comparatively weird thing. Even if we can set up a “partial selection process” which incentivizes maximization through only some channels, wouldn’t it be blind to the side-channels, and so unable to enforce partiality in the long term? Can’t someone always come along and do better via full agency, no matter how our incentives are set up?

Of course, I’ve already said enough to suggest a resolution to this puzzle.

My tentative resolution to the paradox is: you don’t build “partial optimizers” by taking a full optimizer and trying to add carefully balanced incentives to create indifference about optimizing through a specific channel, or anything like that. (Indifference at the level of the selection process does not lead to indifference at the level of the agents evolved by that selection process.) Rather, partial agency is what selection processes incentivize by default. If there’s a learning-theoretic setup which incentivizes the development of “full agency” (whatever that even means, really!) I don’t know what it is yet.

Why?

Learning is basically episodic. In order to learn, you (sort of) need to do the same thing over and over, and get feedback. Reinforcement learning tends to assume ergodic environments so that, no matter how badly the agent messes up, it eventually re-enters the same state so it can try again—this is a “soft” episode boundary. Similarly, RL tends to require temporal discounting—this also creates a soft episode boundary, because things far enough in the future matter so little that they can be thought of as “a different episode”.
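A quick numerical illustration of the “soft episode boundary” (the discount factor here is chosen arbitrarily): with discounting by $\gamma$ per step, the fraction of the total achievable discounted reward that lies more than $H$ steps in the future is $\gamma^H$, so beyond a few multiples of the effective horizon $1/(1-\gamma)$ the future is nearly invisible to the training signal.

```python
gamma = 0.99
effective_horizon = 1.0 / (1.0 - gamma)    # 100 steps

for multiple in [1, 3, 10]:
    H = int(multiple * effective_horizon)
    # Fraction of the total achievable (discounted) reward lying more
    # than H steps in the future: gamma ** H.
    print(H, gamma ** H)
# 100  ~0.37
# 300  ~0.05
# 1000 ~0.00004
```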

So, as with map→territory learning (that is, epistemic learning), we can kind of expect any type of learning to be myopic to some extent.

This fits the picture where full agency is an idealization which doesn’t really make sense on close examination, and partial agency is the more real phenomenon. However, I am absolutely not conjecturing that all learning algorithms produce partial agents of some kind rather than full agents. There may still be frameworks which allow us to approach full agency in the limit, such as taking the limit as temporal discounting diminishes, or considering the asymptotic behavior of agents who are able to make precommitments. We may be able to achieve some aspects of full agency, such as superrationality in games, without others.

Again, though, my interest here is more to understand what’s going on. The point is that it’s actually really easy to set up incentives for partial agency, and not so easy to set up incentives for full agency. So it makes sense that the world is full of partial agency.

Some questions:

  • To what extent is it really true that settings such as supervised learning disincentivize strategic manipulation of the data? Can my argument be formalized?

  • If thinking about “optimizing a function” is too coarse-grained (a supervised learner doesn’t exactly minimize prediction error, for example), what’s the best way to revise our concepts so that partial agency becomes obvious rather than counterintuitive?

  • Are there better ways of characterizing the partiality of partial agents? Does myopia cover all cases (so that we can understand things in terms of time-preference), or do we need the more structured stop-gradient formulation in general? Or perhaps a more causal-diagram-ish notion, as my “directionality” intuition suggests? Do the different ways of viewing things have nice relationships to each other?

  • Should we view partial agents as multiagent systems? I’ve characterized partial agency in terms of something resembling game-theoretic equilibrium. The ‘partial’ optimization of a function arises from the price of anarchy, or as it’s known around LessWrong, Moloch. Are partial agents really bags of full agents keeping each other down? This seems a little true to me, but it also doesn’t strike me as the most useful way of thinking about partial agents. For one thing, it takes full agents as a necessary concept to build up partial agents, which seems wrong to me.

  • What’s the relationship between the selection process (learning process, market, …) and the type of partial agents incentivized by it? If we think in terms of myopia: given a type of myopia, can we design a training procedure which tracks or doesn’t track the relevant strategic influences? If we think in terms of stop-gradients: we can take “stop-gradient” literally and stop there, but I suspect there is more to be said about designing training procedures which disincentivize the strategic use of specified paths of influence. If we think in terms of directionality: how do we get from the abstract “change the map to match the territory” to the concrete details of supervised learning?

  • What does partial agency say about inner optimizers, if anything?

  • What does partial agency say about corrigibility? My hope is that there’s a version of corrigibility which is a perfect fit in the same way that map→territory optimization seems like a perfect fit.


Ultimately, the concept of “partial agency” is probably confused. The partial/full clustering is very crude. For example, it doesn’t make sense to think of a non-wireheading agent as “partial” because of its refusal to wirehead. And it might be odd to consider a myopic agent as “partial”—it’s just a time-preference, nothing special. However, I do think I’m pointing at a phenomenon here, which I’d like to understand better.