I love this question! As it happens, I have a rough draft of a post titled something like "Reward is the optimization target for smart RL agents".
TLDR: I think this is true for some AI systems, but not likely true for any RL-directed AGI systems whose safety we should really worry about. They’ll optimize for maximum reward even more than humans do, unless they’re very carefully built to avoid that behavior.
In the final comment on the second thread you linked, TurnTrout says of his Reward is not the optimization target:

"I should have stated up-front: This post addresses model-free policy gradient algorithms like PPO and REINFORCE."
Humans are definitely model-based RL learners at least some of the time—particularly for important decisions.[1] So the claim doesn’t apply to them. I also don’t think it applies to any other capable agent. TurnTrout actually makes a congruent claim in his other post Think carefully before calling RL policies “agents”. Model-free RL algorithms only have limited agency, what I’d call level 1-of-3:
1. Trained to achieve some goal/reward. (Habitual behavior/model-free RL)
2. Predicts outcomes of actions and selects ones that achieve a goal/reward. (Model-based RL)
3. Selects future states that achieve a goal/reward, then plans actions to achieve that state. (No corresponding terminology; "goal-directed" from neuroscience applies to levels 2 and even 1[1], but this level is pretty clearly highly useful for humans.)
That’s from my post Steering subsystems: capabilities, agency, and alignment.
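The three levels can be sketched in toy code. This is a hypothetical illustration only (the function names and the trivial line-world environment are invented), not anyone's actual architecture:

```python
# Hypothetical sketch of the three levels of agency described above.
# All names and the toy environment are invented for illustration.

def level1_habitual(policy, state):
    """Level 1, model-free: act from a learned state -> action mapping."""
    return policy[state]

def level2_model_based(model, reward_fn, state, actions):
    """Level 2, model-based: predict each action's outcome, pick the best."""
    return max(actions, key=lambda a: reward_fn(model(state, a)))

def level3_state_selection(model, reward_fn, state, actions, horizon=2):
    """Level 3: pick the most rewarding reachable future state,
    then return the plan (action sequence) that reaches it."""
    best_plan, best_reward = [], reward_fn(state)

    def search(s, plan):
        nonlocal best_plan, best_reward
        if reward_fn(s) > best_reward:
            best_plan, best_reward = plan, reward_fn(s)
        if len(plan) < horizon:
            for a in actions:
                search(model(s, a), plan + [a])

    search(state, [])
    return best_plan
```

On a toy line world (`model(s, a) = s + a`, reward peaked at state 5), level 3 run from state 3 with horizon 2 selects state 5 and returns the plan `[1, 1]`; the point is only that level 3 chooses the *state* first and derives actions from it.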
But humans don’t seem to optimize for reward all that often! They make self-sacrificial decisions that get them killed. And they usually say they’d refuse to get in Nozick’s experience machine, which would hypothetically remove them from this world and give them a simulated world of maximally-rewarding experiences. They seem to optimize for the things that have given them reward, like protecting loved ones, rather than optimizing for reward itself—just like TurnTrout describes in RINTOT. And humans are model-based for important decisions, presumably using sophisticated models. What gives?
My cognitive neuroscience research focused a lot on dopamine, so I’ve thought a lot about how reward shapes human behavior. The most complete publication is Neural mechanisms of human decision-making, a summary of how humans seem to learn complex behaviors using reward and predictions of reward. But that’s not really a very good description of the overall theory, because neuroscientists are highly suspicious of broad theories, and because I didn’t really want to accidentally accelerate AGI research by describing brain function clearly. I know.
I think humans do optimize for reward; we just do it badly. We do see some sophisticated hedonists with exceptional amounts of time and money say things like “I love new experiences”, which abstracts away almost all of the specifics. Yudkowsky’s “fun theory” also describes a pursuit of reward if you grant that “fun” refers to frequent, strong dopamine spikes (I think that’s exactly what we mean by fun). I think more sophisticated hedonists will get in the experience box, but this is complicated by the approximations in human decision-making. It’s pretty likely that the suffering you’d cause your loved ones by getting in the box and leaving them behind would be so salient, and produce such a negative reward prediction, that it would outweigh all of the many positive predictions of reward. That’s a consequence of salience and our inefficient way of totaling predicted future reward: we imagine salient outcomes and roughly average over their reward predictions.
So I think the more rational and cognitively capable a human is, the more likely they’ll optimize more strictly and accurately for future reward. And I think the same is true of model-based RL systems with any decent decision-making process.
I realize this isn’t the empirically-based answer you asked for. I think the answer has to be based on theory, because some systems will and some won’t optimize for reward. I don’t know the ML RL literature nearly as well as I know the neuroscience RL literature, so there might be some really relevant stuff out there I’m not aware of. I doubt it, because this is such an AI-safety question.[2]
So that’s why I think reward is the optimization target for smart RL agents.
Edit: Thus, RINTOT and similar work has, I think, really confused the AGI safety debate by making strong claims about current AI that don’t apply at all to the AGI we’re worried about. I’ve been thinking about this a lot in the context of a post I’d call “Current AI and alignment theory is largely behaviorist. Expect a cognitive revolution”.
For more than you want to know about the various terminologies, see How sequential interactive processing within frontostriatal loops supports a continuum of habitual to controlled processing.
We debated the terminologies habitual/goal-directed, automatic and controlled, system 1/system 2, and model-free/model-based for years. All of them have limitations, and all of them mean slightly different things. In particular, model-based is vague terminology when systems get more complex than simple RL—but it is very clear that many complex human decisions (certainly ones in which we envision possible outcomes before taking actions) are far on the model-based side, and meet every definition.
One follow-on question is whether RL-based AGI will wirehead. I think this is almost the same question as getting into the experience box—except that that box will only keep going if the AGI engineers it correctly to keep going. So it’s going to have to do a lot of planning before wireheading, unless its decision-making algorithm is highly biased toward near-term rewards over long-term ones. In the course of doing that planning, its other motivations will come into play—like the well-being of humans, if it cares about that. So whether or not our particular AGI will wirehead probably won’t determine our fate.
You might be interested in an earlier discussion on whether “humans are a hot mess”: https://www.lesswrong.com/posts/SQfcNuzPWscEj4X5E/the-hot-mess-theory-of-ai-misalignment-more-intelligent https://www.lesswrong.com/posts/izSwxS4p53JgJpEZa/notes-on-the-hot-mess-theory-of-ai-misalignment
I’d also accept neuroscience RL literature, and also accept theories that would make useful predictions or give conditions on when RL algorithms optimize for the reward, not just empirical results.
At any rate, I’d like to see your post soon.
That’s probably as much of that post as I’ll get around to. It’s not high on my priority list because I don’t see how it’s a crux for any important alignment theory. I may cover what I think is important about it in the “behaviorist...” post.
Edit: I was going to ask why you were thinking this was important.
It seems pretty cut and dried; even TurnTrout wasn’t claiming this was true beyond model-free RL. I guess LLMs are model-free, so that’s relevant. I just expect them to be turned into agents with explicit goals, so I don’t worry much about how they behave in base form.
FWIW, I strongly disagree with this claim. I believe they are model-based, with the usual datasets & training approaches, even before RLHF/RLAIF.
What do you mean by “model-based”?
Interesting. There’s certainly a lot going on in there, and some of it very likely is at least vague models of future word occurrences (and corresponding events). The definition of model-based gets pretty murky outside of classic RL, so it’s probably best to just directly discuss what model properties give rise to what behavior, e.g. optimizing for reward.
Model-free systems can produce goal-directed behavior. They do this if they have seen some relevant behavior that achieves a given goal, their input or some internal representation includes the current goal, and they can generalize well enough to apply what they’ve experienced to the current context. (This follows the neuroscience definition of habitual vs. goal-directed: behavior changes to track the current goal state, usually whether the animal is hungry or thirsty.)
So if they’re strong enough generalizers, I think even a model-free system actually optimizes for reward.
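A minimal sketch of that idea (hypothetical toy code, not any standard algorithm): a purely model-free lookup policy whose input includes the current goal can still produce goal-directed behavior, provided it generalizes across (state, goal) pairs.

```python
# Toy "model-free but goal-conditioned" policy: a nearest-neighbor lookup
# over experienced (state, goal) -> action pairs. No forward model is
# consulted, yet behavior tracks the current goal when generalization
# succeeds. All data and names are hypothetical.

def goal_conditioned_policy(experience, state, goal):
    """Return the action from the most similar remembered (state, goal)."""
    def similarity(key):
        s, g = key
        return -(abs(s - state) + abs(g - goal))
    return experience[max(experience, key=similarity)]

# Experience from two past episodes; a novel (state, goal) pair still gets
# a sensible action because the lookup generalizes by similarity.
experience = {(0, 5): "go_right", (9, 0): "go_left"}
```

The better the generalization (here, the similarity function), the more the system's behavior looks like optimizing for whatever the current goal is.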
I think the claim should be stronger: for a smart enough RL system, reward is the optimization target.
IMO, the important crux is whether we really need to secure the reward function from wireheading/tampering. If an RL algorithm optimizes for the reward, you will need much more security/much more robust reward functions than if RL algorithms don’t optimize for the reward, because optimization amplifies both problems and solutions.
Ah yes. I agree that the wireheading question deserves more thought. I’m not confident that my answer to wireheading applies to the types of AI we’ll actually build—I haven’t thought about it enough.
FWIW the two papers I cited are secondary research, so they branch directly into a massive amount of neuroscience research that indirectly bears on the question in mammalian brains. None of it that I can think of directly addresses the question of whether reward is the optimization target for humans. I’m not sure how you’d empirically test this.
I do think it’s pretty clear that some types of smart, model-based RL agents would optimize for reward. Those are the ones that (a) choose actions based on the highest estimated sum of future rewards (as humans seem to, very approximately), and (b) are smart enough to estimate future rewards fairly accurately.
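Condition (a) can be sketched as a toy planner (the names and the one-dimensional environment are made up for illustration) that scores each action by a short rollout of estimated discounted reward:

```python
# Toy sketch of an agent that picks the action with the highest estimated
# sum of discounted future rewards. `model` predicts the next state; both
# it and the environment below are invented for illustration.

def choose_action(model, reward_fn, state, actions, depth=3, gamma=0.9):
    def value(s, a, d):
        ns = model(s, a)           # predicted next state
        v = reward_fn(ns)          # predicted immediate reward
        if d > 1:                  # recurse over the best continuation
            v += gamma * max(value(ns, a2, d - 1) for a2 in actions)
        return v
    return max(actions, key=lambda a: value(state, a, depth))

# Walking right toward the rewarding state 3 beats walking left.
model = lambda s, a: s + a
reward = lambda s: 1.0 if s == 3 else 0.0
```

How strictly such an agent "optimizes for reward" then depends on how accurate `model` and `reward_fn` are, which is the smartness condition (b).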
LLMs with RLHF/RLAIF may be the relevant case. They are model-free by TurnTrout’s definition, and I’m happy to accept his use of the terminology. But they do have a powerful critic component (at least in training—I’m not sure about deployment, but probably there too), so it seems possible that they might develop a highly general representation of “stuff that gives the system rewards”. I’m not worried about that, because I think that will happen long after we’ve given them agentic goals, and long after they’ve developed a representation of “stuff humans reward me for doing”—which could be mis-specified enough to lead to doom if it was the only factor.
"So I think the more rational and cognitively capable a human is, the more likely they’ll optimize more strictly and accurately for future reward."

If this is true at all, it’s not going to be a very strong effect, meaning you can find very rational and cognitively capable people who do the opposite of this in decision situations that directly pit reward against the things they hold most dearly. (And it may not be true, because a lot of personal hedonists tend to “lack sophistication,” in the sense that they don’t understand that their own valuing of nothing but their own pleasure is not how everyone else who’s smart experiences the world. So, there’s at least a midwit level of “sophistication” where hedonists seem overrepresented.)
Maybe it’s the case that there’s a weak correlation that makes the quote above “technically accurate,” but that’s not enough to speak of reward being the optimization target. For comparison, even if it is the case that more intelligent people prefer classical music over k-pop, that doesn’t mean classical music is somehow inherently superior to k-pop, or that classical music is “the music taste target” in any revealing or profound sense. After all, some highly smart people can still be into k-pop without making any mistake.
I’ve written about this extensively here and here. Some relevant excerpts from the first linked post:

One of many takeaways I got from reading Kaj Sotala’s multi-agent models of mind sequence (as well as comments by him) is that we can model people as pursuers of deep-seated needs. In particular, we have subsystems (or “subagents”) in our minds devoted to various needs-meeting strategies. The subsystems contribute behavioral strategies and responses to help maneuver us toward states where our brain predicts our needs will be satisfied. We can view many of our beliefs, emotional reactions, and even our self-concept/identity as part of this set of strategies. Like life plans, life goals are “merely” components of people’s needs-meeting machinery.[8]

Still, as far as components of needs-meeting machinery go, life goals are pretty unusual. Having life goals means to care about an objective enough to (do one’s best to) disentangle success on it from the reasons we adopted said objective in the first place. The objective takes on a life of its own, and the two aims (meeting one’s needs vs. progressing toward the objective) come apart. Having a life goal means having a particular kind of mental organization so that “we” – particularly the rational, planning parts of our brain – come to identify with the goal more so than with our human needs.[9]

To form a life goal, an objective needs to resonate with someone’s self-concept and activate (or get tied to) mental concepts like instrumental rationality and consequentialism. Some life goals may appeal to a person’s systematizing tendencies and intuitions for consistency. Scrupulosity or sacredness intuitions may also play a role, overriding the felt sense that other drives or desires (objectives other than the life goal) are of comparable importance.

[...]

Adopting an optimization mindset toward outcomes inevitably leads to a kind of instrumentalization of everything “near term.” For example, suppose your life goal is about maximizing the number of your happy days. The rational way to go about your life probably implies treating the next decades as “instrumental only.” On a first approximation, the only thing that matters is optimizing the chances of obtaining indefinite life extension (potentially leading to more happy days). Through adopting an outcome-focused optimizing mindset, seemingly self-oriented concerns such as wanting to maximize the number of happiness moments turn into an almost “other-regarding” endeavor. After all, only one’s far-away future selves get to enjoy the benefits – which can feel essentially like living for someone else.[12]

[12] This points at another line of argument (in addition to the ones I gave in my previous post) to show why hedonist axiology isn’t universally compelling: To be a good hedonist, someone has to disentangle the part of their brain that cares about short-term pleasure from the part of them that does long-term planning. In doing so, they prove they’re capable of caring about something other than their pleasure. It is now an open question whether they use this disentanglement capability for maximizing pleasure or for something else that motivates them to act on long-term plans.
It seems we get quite easily addicted to things, which is a form of wireheading. Not just to drugs, but also to various apps and websites.
I have also noticed this ;)