In-Context Learning: A Bridge between RL and Expected Utility Maximization

Reinforcement learning (RL) is a popular machine learning approach in which an agent is trained to take actions in an environment so as to maximize a reward signal. Traditionally, RL has been understood as a form of expected utility maximization, where the agent’s actions are chosen to maximize expected reward. However, it is important to recognize that RL can also be understood as learning through reinforcement, where the agent adapts its decision-making strategy based on its reward history.

In this post, we will explore the relationship between RL and expected utility maximization, and discuss the concept of in-context learning as it relates to RL. We will also examine the potential for RL agents, such as ChatGPT, to learn about and adapt to human values in-context.

RL as Learning through Reinforcement

In RL, an agent learns to take actions in an environment in order to maximize a reward signal. The agent’s decision-making strategy is known as a policy, which specifies the actions the agent should take given what it has observed. It is common to model the policy as a stationary policy: one that depends only on the current state, and so does not change over time. However, it is also possible to consider non-stationary policies, which adapt over time based on the reward history.
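The distinction can be made concrete with a minimal sketch. The names and numbers below are illustrative, not from any particular RL library: a stationary policy is a fixed map from states to actions, while a non-stationary policy carries internal preferences that are updated from the reward history.

```python
import random

# A stationary policy: the action depends only on the current state,
# never on the step index or on past rewards.
def stationary_policy(state):
    return 0 if state == "low" else 1

# A non-stationary (history-dependent) policy: a hypothetical sketch in
# which action preferences are shifted by the observed reward history.
class ReinforcedPolicy:
    def __init__(self, n_actions=2, lr=0.1):
        self.prefs = [0.0] * n_actions
        self.lr = lr

    def act(self):
        # Greedy sketch: pick the highest-preference action,
        # breaking ties uniformly at random.
        best = max(self.prefs)
        return random.choice(
            [a for a, p in enumerate(self.prefs) if p == best]
        )

    def reinforce(self, action, reward):
        # Shift preferences toward actions that were rewarded.
        self.prefs[action] += self.lr * reward
```

After a few calls to `reinforce`, the same `act()` call returns different actions than it did at the start, which is exactly what a stationary policy cannot do.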

One of the key properties of RL is that, in a Markov decision process, there always exists a stationary policy that maximizes expected utility for the reinforcing mechanism. This means we can restrict the search for an optimal policy to stationary policies without any loss in performance. The property can be formalized using the Markov decision process framework, as demonstrated in Theorem 1.7 of this text.
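This can be illustrated by solving a tiny MDP with value iteration: the greedy policy read off the fixed point V* is a fixed state-to-action map, with no history dependence needed for optimality. The transition probabilities and rewards below are made up for illustration.

```python
import numpy as np

# A two-state, two-action MDP (all numbers illustrative).
P = np.array([  # P[s, a, s'] transition probabilities
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.5, 0.5], [0.1, 0.9]],
])
R = np.array([  # R[s, a] expected immediate reward
    [0.0, 1.0],
    [2.0, 0.0],
])
gamma = 0.9

# Value iteration to (near) convergence.
V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * (P @ V)        # Q[s, a]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

# The greedy policy is stationary: a single state -> action lookup.
policy = Q.argmax(axis=1)
```

The resulting `V` satisfies the Bellman optimality equation, and `policy` is just an array indexed by state, which is the sense in which no non-stationary policy can do better here.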

There has been some discussion of the role of selection in reinforcement learning: the idea that filtering trajectories by reward is a useful way to understand how reinforcement works in practice. It has even been suggested that a selection mechanism can be used to bridge the gap between RL and thermodynamics in the appropriate limit. This line of thinking can be used to argue that language models such as GPT can perform RL, since the cross-entropy loss on selected (rewarded) samples corresponds to a policy-gradient update.
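The loss correspondence can be checked on a toy categorical policy: for a softmax policy, the gradient of the cross-entropy loss on a sample kept by the selection step is identical to the REINFORCE estimator with reward 1. A minimal sketch (parameters and the sampled action are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.array([0.2, -0.5, 0.1])   # logits of a 3-action policy
pi = softmax(theta)
a, r = 2, 1.0   # a sampled action that survived selection (reward 1)

# REINFORCE gradient for this sample: r * grad_theta log pi(a),
# which for a softmax policy is r * (onehot(a) - pi).
onehot = np.eye(3)[a]
pg = r * (onehot - pi)

# Cross-entropy ("imitate the selected sample") gradient: identical.
ce = onehot - pi

assert np.allclose(pg, ce)
```

With graded rewards instead of binary selection, the same identity holds after weighting each sample's cross-entropy term by its reward.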

In-Context Learning in RL

RL agents that use non-stationary policies may be able to learn about their reinforcement schedule through in-context learning. This is a form of learning in which the agent adapts to a specific user or environment through interaction, typically via natural language, rather than through updates to its parameters on a pre-defined dataset as in traditional machine learning. This allows the agent to learn and adapt on the fly, making it more flexible and able to handle a wider range of situations.
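A hedged sketch of what "learning the reinforcement schedule in-context" can mean: the policy below is a fixed function of the interaction history (its "context") on a two-armed bandit, so the agent adapts within an episode without any parameter updates. All names and reward probabilities are hypothetical.

```python
import random

def in_context_policy(history, eps=0.1):
    """Epsilon-greedy over empirical arm means computed from the
    in-context history of (action, reward) pairs."""
    totals = {0: [0.0, 0], 1: [0.0, 0]}
    for a, r in history:
        totals[a][0] += r
        totals[a][1] += 1
    if random.random() < eps or not history:
        return random.randrange(2)
    means = {a: s / n if n else 0.0 for a, (s, n) in totals.items()}
    return max(means, key=means.get)

# Illustrative rollout: the function never changes, only its context does.
random.seed(0)
probs = [0.2, 0.8]            # hypothetical reward probability per arm
history = []
for _ in range(200):
    a = in_context_policy(history)
    r = 1.0 if random.random() < probs[a] else 0.0
    history.append((a, r))
```

The "learning" lives entirely in the growing `history` argument, which is the bandit analogue of a language model adapting to a conversation without weight updates.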

One potential advantage of combining RL with self-attention, as in ChatGPT, is that the resulting policies may better model the process of adapting toward expected utility maximization. This is because self-attention can be understood as internalizing core aspects of gradient descent and Bayesian inference, which provides specific and testable predictions about how models internalize the reinforcement mechanisms of their environment.
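One concrete version of the "self-attention internalizes gradient descent" claim: with in-context inputs as keys, a query input, and targets as values, an unnormalized linear-attention readout equals the prediction after one gradient-descent step on in-context least squares starting from zero weights. A sketch under those assumptions (all data random and illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 8
X = rng.normal(size=(n, d))   # in-context inputs (keys)
y = rng.normal(size=n)        # in-context targets (values)
x_q = rng.normal(size=d)      # query input
eta = 0.1                     # gradient-descent step size

# One GD step on L(w) = 0.5 * sum_i (w @ x_i - y_i)**2 from w = 0:
# the gradient at zero is -sum_i y_i * x_i, so w' = eta * sum_i y_i * x_i.
w = eta * (y @ X)

# Unnormalized linear-attention readout at the query:
# eta * sum_i <x_q, x_i> * y_i.
attn_pred = eta * ((X @ x_q) @ y)

assert np.allclose(w @ x_q, attn_pred)
```

The identity is exact only for this linear, unnormalized form of attention and a zero initialization; it is meant as an intuition pump for the testable-predictions point above, not as a claim about ChatGPT's actual weights.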

Recent research has also focused on developing RL models with the capacity for in-context learning, such as this paper. It can be argued that a model like ChatGPT demonstrates this capacity in practice by learning human values in-context.


In conclusion, it is important to recognize that RL is not just about expected utility maximization, but also about learning through reinforcement. This understanding of RL opens up the possibility for RL agents, such as ChatGPT, to adapt to and learn about human values in-context. Further research and discussion on the use of in-context learning in RL is necessary to fully understand its potential and risks.
