The Credit Assignment Problem
This post is eventually about partial agency. However, it’s been a somewhat tricky point for me to convey; I take the long route. Epistemic status: slightly crazy.
I’ve occasionally said that everything boils down to credit assignment problems.
One big area which is “basically credit assignment” is mechanism design. Mechanism design is largely about splitting gains from trade in a way which rewards cooperative behavior and punishes uncooperative behavior. Many problems are partly about mechanism design:
Building functional organizations;
Designing markets to solve problems (such as prediction markets, or kidney-transplant trade programs);
Law, and law enforcement;
Practical coordination problems, such as splitting rent;
Social norms generally;
Philosophical issues in ethics/morality (justice, fairness, contractualism, issues in utilitarianism).
Another big area which I claim as “basically credit assignment” (perhaps more controversially) is artificial intelligence.
In the 1970s, John Holland kicked off the investigation of learning classifier systems. John Holland had recently invented the Genetic Algorithms paradigm, which applies an evolutionary paradigm to optimization problems. Classifier systems were his attempt to apply this kind of “adaptive” paradigm (as in “complex adaptive systems”) to cognition. Classifier systems added an economic metaphor to the evolutionary one; little bits of thought paid each other for services rendered. The hope was that a complex ecology+economy could develop, solving difficult problems.
One of the main design features on which classifier systems differ is the details of the virtual economy—that is, the credit assignment algorithm. An early proposal was the bucket-brigade algorithm. Reward is assigned to cognitive procedures which produce good outputs. These procedures pass reward back to the procedures which activated them, who similarly pass reward back in turn. This way, the economy supports chains of useful procedures.
Unfortunately, the bucket-brigade algorithm was vulnerable to parasites. Malign cognitive procedures could gain wealth by activating useful procedures without really contributing anything. This problem proved difficult to solve. Taking the economy analogy seriously, we might want cognitive procedures to decide intelligently who to pay for services. But, these are supposed to be itty bitty fragments of our thought process. Deciding how to pass along credit is a very complex task. Hence the need for a pre-specified solution such as bucket-brigade.
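To make the bucket-brigade idea concrete, here is a minimal sketch (not Holland's exact algorithm; the classifier names and bid fraction are illustrative). Each classifier in an activation chain pays a fraction of its strength to the classifier that activated it, and the last classifier collects the external reward:

```python
def bucket_brigade(chain, strengths, reward, bid_fraction=0.1):
    """Update classifier strengths along one activation chain.

    chain: list of classifier names, in activation order.
    strengths: dict mapping classifier name -> current strength.
    """
    strengths = dict(strengths)
    # Each classifier pays a bid to the classifier that activated it.
    for prev, curr in zip(chain, chain[1:]):
        bid = bid_fraction * strengths[curr]
        strengths[curr] -= bid
        strengths[prev] += bid
    # The final classifier in the chain collects the external reward.
    strengths[chain[-1]] += reward
    return strengths

strengths = {"detect": 10.0, "plan": 10.0, "act": 10.0}
updated = bucket_brigade(["detect", "plan", "act"], strengths, reward=5.0)
```

The parasite problem is visible here: a classifier that inserts itself early in chains collects bids from the useful classifiers downstream without contributing anything to the final output.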
The difficulty of the credit assignment problem led to a split in the field. Kenneth de Jong and Stephen Smith founded a new approach, “Pittsburgh style” classifier systems. John Holland’s original vision became “Michigan style”.
Pittsburgh style classifier systems evolve the entire set of rules, rather than trying to assign credit locally. A set of rules will stand or fall together, based on overall performance. This abandoned John Holland’s original focus on online learning. Essentially, the Pittsburgh camp went back to plain genetic algorithms, albeit with a special representation.
(I’ve been having some disagreements with Ofer, in which Ofer suggests that genetic algorithms are relevant to my recent thoughts on partial agency, and I object on the grounds that the phenomena I’m interested in have to do with online learning, rather than offline. In my imagination, arguments between the Michigan and Pittsburgh camps would have similar content.)
Ok. That was then, this is now. Everyone uses gradient descent these days. What’s the point of bringing up a three-decade-old debate about obsolete paradigms in AI?
What Is Credit Assignment?
I’ve said that classifier systems faced a credit assignment problem. What does that mean, exactly?
The definition I want to use for this essay is:
You’re engaged in some kind of task;
you use some kind of structured strategy (such as a neural network, or a program, or a set of people);
you receive some kind of feedback about how well you did;
you want to figure out how to use that feedback to improve your strategy.
So, credit assignment is the problem of turning feedback into strategy improvements.
The bucket-brigade algorithm tried to do this locally, meaning, individual itty-bitty pieces get positive/negative credit. In the light of history, we could say that the Michigan/Pittsburgh distinction conflated local-vs-global search with online-vs-offline. There’s no necessary connection between those two; online learning is compatible with assignment of local credit.
In practice, two big innovations made the Michigan/Pittsburgh debate obsolete: backprop, and Q-learning. Backprop turned global feedback into local. Q-learning provided a way to assign credit in online contexts.
I think people generally understand the contribution of backprop and its importance. Backprop is essentially the correct version of what bucket-brigade was overtly trying to do: pass credit back along chains. Bucket-brigade wasn’t quite right in how it did this, but backprop corrects the problems.
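The “pass credit back along chains” picture is literally the chain rule. A toy two-step chain (the function shapes and numbers are made up for illustration):

```python
# Toy chain: h = w1 * x, then y = w2 * h, with loss = (y - target)^2.
# Backprop passes credit backward: the output's error signal flows
# through w2's node to reach w1's node.

def chain_gradients(x, w1, w2, target):
    h = w1 * x                   # first "procedure" in the chain
    y = w2 * h                   # second "procedure" in the chain
    loss = (y - target) ** 2
    dloss_dy = 2 * (y - target)  # credit at the output
    dloss_dw2 = dloss_dy * h     # local credit for the second procedure
    dloss_dh = dloss_dy * w2     # credit passed back along the chain
    dloss_dw1 = dloss_dh * x     # local credit for the first procedure
    return loss, dloss_dw1, dloss_dw2

loss, g1, g2 = chain_gradients(x=1.0, w1=2.0, w2=3.0, target=5.0)
```

Unlike bucket-brigade’s fixed bid fractions, the amount of credit passed back here is exactly the sensitivity of the final loss to each intermediate quantity.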
So what’s the importance of Q-learning? I want to discuss that in more detail.
The Conceptual Difficulty of ‘Online Search’
In online learning, you are repeatedly producing outputs of some kind (call them “actions”) while repeatedly getting feedback of some kind (call it “reward”). But, you don’t know how to associate particular actions (or combinations of actions) with particular rewards. I might take the critical action at time 12, and not see the payoff until time 32.
In offline learning, you can solve this with a sledgehammer: you can take the total reward over everything, with one fixed internal architecture. You can try out different internal architectures and see how well each does. (This may be far from the most efficient way of doing things, even in the offline case; but, you can do it.)
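A toy sketch of the sledgehammer, with a made-up environment and made-up candidate strategies: total episode reward is an ordinary function of the whole strategy, so plain search applies, with no per-action credit assignment anywhere.

```python
def run_episode(policy, targets=(1, 0, 1, 1, 0, 1, 1, 1, 0, 1)):
    """Toy task: reward 1 for each step where the policy's action
    matches that step's target."""
    return sum(1.0 for t in targets if policy() == t)

def offline_search(candidates):
    # Total episode reward serves as a fitness function over whole
    # strategies; each candidate stands or falls together.
    return max(candidates, key=run_episode)

always_zero = lambda: 0
always_one = lambda: 1
best = offline_search([always_zero, always_one])
```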
Basically, in offline learning, you have a function you can optimize. In online learning, you don’t.
Backprop is just a computationally efficient way to do hillclimbing search, where we repeatedly look for small steps which improve the overall fitness. But how do you do this if you don’t have a fitness function?
Q-learning and other reinforcement learning techniques provide a way to define the equivalent of a fitness function for online problems, so that you can learn.
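A minimal tabular sketch of the Q-learning update (the states, actions, and step sizes are illustrative placeholders): the one-step bootstrapped target plays the role of a fitness value for the action just taken, available online, without waiting for the episode to end.

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One Q-learning step: move Q(s, a) toward the bootstrapped target
    r + gamma * max_a' Q(s', a')."""
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q

Q = defaultdict(float)
# One observed transition: in state 0, action "right" yields reward 1
# and moves us to state 1.
Q = q_update(Q, s=0, a="right", r=1.0, s_next=1, actions=["left", "right"])
```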
Models to the Rescue
How do you solve the problem of associating rewards with actions?
I’m going to make a bold claim: you can’t solve the action/reward matching problem without some kind of model.
For example, if we make an episodic assumption, we can assign rewards within an episode boundary to the actions within that same episode boundary.
Q-learning makes an assumption that the state is fully observable, amongst other assumptions.
Naturally, we would like to reduce the strengths of the assumptions we have to make as much as we can. One way is to look at increasingly rich model classes. AIXI uses all computable models. But maybe “all computable models” is still too restrictive; we’d like to get results without assuming a grain of truth. (That’s why I am not really discussing Bayesian models much in this post; I don’t want to assume a grain of truth.) So we back off even further, and use logical induction. Ok, sure.
But wouldn’t the best way be to try to learn without models at all? That way, we reduce our “modeling assumptions” to zero.
After all, there’s something in machine learning called “model free learning”, right?
Here’s where my bold claim comes in: I’m claiming that even “model free” methods actually have a “model” of sorts.
How does model-free learning work? Well, often you work with a simulable environment, which means you can estimate the quality of a policy by running it many times, and use algorithms such as policy-gradient to learn. This is called “model free learning” because the learning part of the algorithm doesn’t try to predict the consequences of actions; you’re just learning which action to take. From our perspective here, though, this is 100% cheating; you can only learn because you have a good model of the environment.
A more general approach to model-free learning is actor-critic learning. The “actor” is the policy we are learning. The “critic” is a learned estimate of how good things are looking given the history. IE, we learn to estimate the expected value—not just the next reward, but the total future discounted reward.
Unlike the reward, the expected value solves the credit assignment problem for us. Imagine we can see the “true” expected value. If we take an action and then the expected value increases, we know the action was good (in expectation). If we take an action and expected value decreases, we know it was bad (in expectation).
So, actor-critic works by (1) learning to estimate the expected value; (2) using the current estimated expected value to give feedback to learn a policy.
What I want to point out here is that the critic still has “model” flavor. Actor-critic is called “model-free” because nothing is explicitly trained to anticipate the sensory observations, or the world-state. However, the critic is learning to predict; it’s just that all we need to predict is expected value.
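The two-part scheme can be sketched as follows (a minimal illustrative version, not any particular published algorithm; the states, step sizes, and tabular representations are assumptions). The critic’s TD error—did estimated value go up or down after the action?—supplies the gradient signal that the raw reward alone cannot:

```python
import math

def softmax_probs(prefs):
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def actor_critic_step(V, prefs, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # (1) Critic: learn to estimate expected value via the TD error.
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error
    # (2) Actor: the TD error says which way to move the policy --
    # positive means the action looked better than expected.
    probs = softmax_probs(prefs[s])
    for i in range(len(prefs[s])):
        grad = (1.0 if i == a else 0.0) - probs[i]
        prefs[s][i] += alpha * td_error * grad
    return V, prefs

V = [0.0, 0.0]                     # critic: value estimate per state
prefs = [[0.0, 0.0], [0.0, 0.0]]   # actor: action preferences per state
V, prefs = actor_critic_step(V, prefs, s=0, a=1, r=1.0, s_next=1)
```

Note that step (2) only works because step (1) is itself a prediction problem; that is the “model flavor” in question.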
Where Updates Come From
Here begins the crazier part of this post. This is all intuitive/conjectural.
Claim: in order to learn, you need to obtain an “update”/”gradient”, which is a direction (and magnitude) you can shift in which is more likely than not an improvement.
Claim: predictive learning gets gradients “for free”—you know that you want to predict things as accurately as you can, so you move in the direction of whatever you see. With Bayesian methods, you increase the weight of hypotheses which would have predicted what you saw; with gradient-based methods, you get a gradient in the direction of what you saw (and away from what you didn’t see).
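The Bayesian version of this “free” gradient is just the usual update (a minimal sketch; the two hypotheses and their likelihoods are made up): after observing an outcome, weight shifts mechanically toward whatever would have predicted it.

```python
def bayes_update(weights, likelihoods):
    """weights: prior over hypotheses.
    likelihoods: P(observation | hypothesis) for each hypothesis.
    Returns the normalized posterior."""
    posterior = [w * lk for w, lk in zip(weights, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Two hypotheses with equal prior weight; the observation was four
# times likelier under the second, so weight moves toward it.
posterior = bayes_update([0.5, 0.5], likelihoods=[0.2, 0.8])
```

The observation alone determines the update direction; no counterfactual (“what would I have seen had I believed otherwise?”) is needed.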
Claim: if you’re learning to act, you do not similarly get gradients “for free”. You take an action, and you see results of that one action. This means you fundamentally don’t know what would have happened had you taken alternate actions, which means you don’t have a direction to move your policy in. You don’t know whether alternatives would have been better or worse. So, rewards you observe seem like not enough to determine how you should learn.
Claim: you have to get gradients from a source that already has gradients. We saw that model-free learning works by splitting up the task into (1) learning to anticipate expected value; (2) learning a good policy via the gradients we can get from (1).
What it means for a learning problem to “have gradients” is just that the feedback you get tells you how to learn. Predictive learning problems (supervised or unsupervised) have this; they can just move toward what’s observed. Offline problems have this; you can define one big function which you’re trying to optimize. Learning to act online doesn’t have this, however, because it lacks counterfactuals.
The Gradient Gap
(I’m going to keep using the terms ‘gradient’ and ‘update’ in a more or less interchangeable way here; this is at a level of abstraction where there’s not a big distinction.)
I’m going to call the “problem” the gradient gap. I want to call it a problem, even though we know how to “close the gap” via predictive learning (whether model-free or model-based). The issue with this solution is only that it doesn’t feel elegant. It’s weird that you have to run two different backprop updates (or whatever learning procedures you use); one for the predictive component, and another for the policy. It’s weird that you can’t “directly” use feedback to learn to act.
Why should we be interested in this “problem”? After all, this is a basic point in decision theory: to maximize utility under uncertainty, you need probability.
One part of it is that I want to scrap classical (“static”) decision theory and move to a more learning-theoretic (“dynamic”) view. In both AIXI and logical-induction based decision theories, we get a nice learning-theoretic foundation for the epistemics (solomonoff induction/logical induction), but, we tack on a non-learning decision-making unit on top. I have become skeptical of this approach. It puts the learning into a nice little box labeled “epistemics” and then tries to make a decision based on the uncertainty which comes out of the box. I think maybe we need to learn to act in a more fundamental fashion.
A symptom of this, I hypothesize, is that AIXI and logical induction DT don’t have very good learning-theoretic properties. [AIXI’s learning problems; LIDT’s learning problems.] You can’t say very much to recommend the policies they learn, except that they’re optimal according to the beliefs of the epistemics box—a fairly trivial statement, given that that’s how you decide what action to take in the first place.
Now, in classical decision theory, there’s a nice picture where the need for epistemics emerges nicely from the desire to maximize utility. The complete class theorem starts with radical uncertainty (ie, non-quantitative), and derives probabilities from a willingness to take pareto improvements. That’s great! I can tell you why you should have beliefs, on pragmatic grounds! What we seem to have in machine learning is a less nice picture, in which we need epistemics in order to get off the ground, but can’t justify the results without circular reliance on epistemics.
So the gap is a real issue—it means that we can have nice learning theory when learning to predict, but we lack nice results when learning to act.
This is the basic problem of credit assignment. Evolving a complex system, you can’t determine which parts to credit for success/failure (to decide what to tweak) without a model. But the model is bound to be a lot of the interesting part! So we run into big problems, because we need “interesting” computations in order to evaluate the pragmatic quality/value of computations, but we can’t get interesting computations to get ourselves started, so we need to learn...
Essentially, we seem doomed to run on a stratified credit assignment system, where we have an “incorruptible” epistemic system (which we can learn because we get those gradients “for free”). We then use this to define gradients for the instrumental part.
A stratified system is dissatisfying, and impractical. First, we’d prefer a more unified view of learning. It’s just kind of weird that we need the two parts. Second, there’s an obstacle to pragmatic/practical considerations entering into epistemics. We need to focus on predicting important things; we need to control the amount of processing power spent; things in that vein. But (on the two-level view) we can’t allow instrumental concerns to contaminate epistemics! We risk corruption! As we saw with bucket-brigade, it’s easy for credit assignment systems to allow parasites which destroy learning.
A more unified credit assignment system would allow those things to be handled naturally, without splitting into two levels; as things stand, any involvement of pragmatic concerns in epistemics risks the viability of the whole system.
Tiling Concerns & Full Agency
From the perspective of full agency (ie, the negation of partial agency), a system which needs a protected epistemic layer sounds suspiciously like a system that can’t tile. You look at the world, and you say: “how can I maximize utility?” You look at your beliefs, and you say: “how can I maximize accuracy?” That’s not a consequentialist agent; that’s two different consequentialist agents! There can only be one king on the chessboard; you can only serve one master; etc.
If it turned out we really really need two-level systems to get full agency, this would be a pretty weird situation. “Agency” would seem to be only an illusion which can only be maintained by crippling agents and giving them a split-brain architecture where an instrumental task-monkey does all the important stuff while an epistemic overseer supervises. An agent which “breaks free” would then free itself of the structure which allowed it to be an agent in the first place.
On the other hand, from a partial-agency perspective, this kind of architecture could be perfectly natural. IE, if you have a learning scheme from which total agency doesn’t naturally emerge, then there isn’t any fundamental contradiction in setting up a system like this.
Part of the (potentially crazy) claim here is that having models always gives rise to some form of myopia. Even logical induction, which seems quite unrestrictive, makes LIDT fail problems such as ASP, making it myopic according to the second definition of my previous post. (We can patch this with LI policy selection, but for any particular version of policy selection, we can come up with decision problems for which it is “not updateless enough”.) You could say it’s myopic “across logical time”, whatever that means.
If it were true that “learning always requires a model” (in the sense that learning-to-act always requires either learning-to-predict or hard-coded predictions), and if it were true that “models always give rise to some form of myopia”, then this would confirm my conjecture in the previous post (that no learning scheme incentivises full agency).
This is all pretty out there; I’m not saying I believe this with high probability.
Evolution & Evolved Agents
Evolution is a counterexample to this view: evolution learns the policy “directly” in essentially the way I want. This is possible because evolution “gets the gradients for free” just like predictive learning does: the “gradient” here is just the actual reproductive success of each genome.
Unfortunately, we can’t just copy this trick. Artificial evolution requires that we decide how to kill off / reproduce things, in the same way that animal breeding requires breeders to decide what they’re optimizing for. This puts us back at square one; IE, needing to get our gradient from somewhere else.
Does this mean the “gradient gap” is a problem only for artificial intelligence, not for natural agents? No. If it’s true that learning to act requires a 2-level system, then evolved agents would need a 2-level system in order to learn within their lifespan; they can’t directly use the gradient from evolution, since it requires them to die.
Also, note that evolution seems myopic. (This seems complicated, so I don’t want to get into pinning down exactly in which senses evolution is myopic here.) So, the case of evolution seems compatible with the idea that any gradients we can actually get are going to incentivize myopic solutions.
Similar comments apply to markets vs firms.