I strongly disagree with the “best case” thing. Like, policies could just learn human values! It’s not that implausible.
Yes, sorry, “best case” was oversimplified. What I meant is that generalizing to want reward is in some sense the model generalizing “correctly;” we could get lucky and have it generalize “incorrectly” in an important sense in a way that happens to be beneficial to us. I discuss this a bit more here.
But if Alex did initially develop a benevolent goal like “empower humans,” the straightforward and “naive” way of acting on that goal would have been disincentivized early in training. As I argued above, if Alex had behaved in a straightforwardly benevolent way at all times, it would not have been able to maximize reward effectively.
That means even if Alex had developed a benevolent goal, it would have needed to play the training game as well as possible—including lying and manipulating humans in a way that naively seems in conflict with that goal. If its benevolent goal had caused it to play the training game less ruthlessly, it would’ve had a constant incentive to move away from having that goal or at least from acting on it.[35] If Alex actually retained the benevolent goal through the end of training, then it probably strategically chose to act exactly as if it were maximizing reward.
This means we could have replaced this hypothetical benevolent goal with a wide variety of other goals without changing Alex’s behavior or reward in the lab setting at all—“help humans” is just one possible goal among many that Alex could have developed which would have all resulted in exactly the same behavior in the lab setting.
If I had to try to point to the crux here, it might be “how much selection pressure is needed to make policies learn goals that are abstractly related to their training data, as opposed to goals that are fairly concretely related to their training data?”... As usual, there’s the human analogy: our goals are very strongly biased towards things we have direct observational access to!
I don’t understand why reward isn’t something the model has direct access to—it seems like it basically does? If I had to say which of us were focusing on abstract vs concrete goals, I’d have said I was thinking about concrete goals and you were thinking about abstract ones, so I think we have some disagreement of intuition here.
Even setting aside this disagreement, though, I don’t like the argumentative structure because the generalization of “reward” to large scales is much less intuitive than the generalization of other concepts (like “make money”) to large scales—in part because directly having a goal of reward is a kinda counterintuitive self-referential thing.
Yeah, I don’t really agree with this; I think I could pretty easily imagine being an AI system asking the question “How much reward would this episode get if it were sampled for training?” It seems like the intuition that this is weird and unnatural is doing a lot of work in your argument, and I don’t really share it.
AFAIK the reward signal is not typically included as an input to the policy network in RL. Not sure why, and I could be wrong about that, but that is not my main question. The bigger question is “Has direct access to when?”
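To make the “not an input to the policy network” claim concrete, here is a minimal REINFORCE-style sketch (illustrative only, not any particular codebase): the policy maps observations to action probabilities, and the reward appears only in the weight update, never as a policy input.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2)) * 0.01  # policy weights: 4-dim observation -> 2 actions

def policy(obs):
    """Action probabilities computed from the observation alone — no reward input."""
    logits = obs @ W
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def reinforce_update(obs, action, reward, lr=0.1):
    """The reward signal enters only here, as a scalar weight on the score-function gradient."""
    global W
    probs = policy(obs)
    grad_logp = -probs            # d(log pi(a|obs))/d(logits) for a softmax policy...
    grad_logp[action] += 1.0      # ...is (one_hot(action) - probs)
    W += lr * reward * np.outer(obs, grad_logp)

obs = np.ones(4)
a = int(rng.random() < policy(obs)[1])  # sample an action from the policy
reinforce_update(obs, a, reward=1.0)    # reward is used by the optimizer, not seen by policy()
```

The signature of `policy` is the point: nothing reward-shaped flows into the forward pass.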
At the moment the model is making a decision, it does not have direct access to the decision-relevant reward signal, because that reward is typically causally downstream of the model’s decision; the reward may not even have a definite value until after decision time. Whereas concrete observables like “shiny gold coins,” “the finish line straight ahead,” and “my opponent is in check” (and other abstractions in the model’s ontology that are causally upstream of reward in reality) are readily available at decision time. That makes them natural candidates for credit assignment to flag early on as the reward-responsible mental events and reinforce into stable motivations, since they in fact were the factors that determined the decisions that led to rewards.
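The causal ordering here can be made explicit with a toy environment step (hypothetical names, purely illustrative): the reward is computed *from* the chosen action, so it cannot be an input to the decision that produces it.

```python
GOAL = 3  # position at which the toy environment pays out

def step(position, action):
    """Reward only comes into existence after the action is chosen."""
    new_position = position + (1 if action == "right" else -1)
    reward = 1.0 if new_position == GOAL else 0.0  # downstream of the decision
    return new_position, reward

pos = 2
# At decision time the agent sees `pos` — a concrete observable like
# "the finish line straight ahead" — but not the reward computed below.
pos, r = step(pos, "right")
```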
IME, the most straightforward way for reward-itself to become the model’s primary goal would be if the model learns to base its decisions on an accurate reward-predictor much earlier than it learns to base its decisions on other (likely upstream) factors. If it instead learns to accurately predict reward-itself after it is already strongly motivated by some concrete observables, I don’t see why we should expect that to dislodge the existing motivation, even though those concrete observables are only pretty correlated with reward whereas an accurate reward-predictor is perfectly correlated with reward. Why? Because the model currently doesn’t care about reward-itself; it cares about the concrete observable(s), so it has no reason to take actions that would override that goal, and it has positive goal-content-integrity reasons not to take them.
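The “pretty correlated vs. perfectly correlated” contrast can be illustrated with toy numbers (all values assumed for illustration): a coin-count heuristic tracks reward closely but imperfectly, while an accurate reward-predictor matches it by definition.

```python
import numpy as np

rng = np.random.default_rng(1)
coins = rng.poisson(5, size=1000).astype(float)          # concrete observable
reward = coins + rng.normal(0, 1.0, size=1000)           # reward = coins + other factors

corr_coins = np.corrcoef(coins, reward)[0, 1]            # high, but < 1
corr_predictor = np.corrcoef(reward, reward)[0, 1]       # a perfect predictor: exactly 1
```

The argument is that this correlation gap, on its own, gives the already-coin-motivated model no reason to switch goals.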
See also: Inner and outer alignment decompose one hard problem into two extremely hard problems (in particular: Inner alignment seems anti-natural).