# Aspiration-based Q-Learning

*Work completed during a two-month internship supervised by **@Jobst Heitzig**.*

*Thanks to Phine Schikhof for her invaluable conversations and friendly support during the internship, and to Jobst Heitzig, who was an amazing supervisor.*

*Epistemic Status: I dedicated two full months to working on this project. I conducted numerous experiments to develop an intuitive understanding of the topic. However, there is still further research required. Additionally, this was my first project in Reinforcement Learning.*

**tldr** — Inspired by satisficing, we introduce a novel concept of non-maximizing agents, ℵ-aspiring agents, whose goal is to achieve an expected gain of ℵ. We derive aspiration-based algorithms from Q-learning and DQN. Preliminary results show promise in multi-armed bandit environments but fall short when applied to more complex settings. We offer insights into the challenges faced in making our aspiration-based Q-learning algorithm converge and propose potential future research directions.

The AI Safety Camp 2024 will host a project for continuing this work and similar approaches under the headline “SatisfIA – AI that satisfies without overdoing it”.

# Introduction

This post centers on the outcomes of my internship, detailing the developments and results achieved. For a deeper understanding of the motivation behind our research, we encourage you to explore Jobst’s agenda or refer to my internship report, which also includes background information on RL and an appendix presenting the algorithms used. Our code is available on our GitHub.

The end goal of this project was to develop and test agents for environments in which the “reward function” is an imperfect proxy for the true utility function, and their relation is so ambiguous that maximizing the reward function likely does not optimize the true utility function. Because of this, I do not use the term “*optimize*” in this post and rather say “*maximize*” in order to avoid confusion.

Other researchers have proposed alternative techniques to mitigate Goodhart’s law in reinforcement learning, such as quantilizers (detailed by @Robert Miles in this video) and the approach described by @jacek in this post. These methods offer promising directions that are worth exploring further. Our satisficing algorithms could potentially be combined with these techniques to enhance performance, and we believe there are opportunities for symbiotic progress through continued research in this area.

# Satisficing and aspiration

The term *satisficing* was first introduced in economics by Simon in 1956. According to Simon’s definition, a satisficing agent with an aspiration ℵ^{[1]} will search through available alternatives until it finds one that gives it a return greater than ℵ. However, about this definition of satisficing, Stuart Armstrong highlights:

> Unfortunately, a self-improving satisficer has an extremely easy way to reach its satisficing goal: to transform itself into a maximiser.

Therefore, inspired by satisficing, we introduce a novel concept: the *ℵ-aspiring agent*. Instead of trying to achieve an expected return greater than or equal to ℵ, an ℵ-aspiring agent aims to **achieve an expected return of** ℵ:

$$\mathbb{E}_\pi[G_0] = \aleph$$

where $G_t$ is the discounted cumulative reward, defined for a discount factor $\gamma \in [0, 1]$ as:

$$G_t = \sum_{k \ge 0} \gamma^k\, r_{t+k}$$

This can be generalized with an interval of acceptable expected returns instead of a single value.
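For concreteness, the discounted return above can be computed over a finite episode like this (a minimal sketch; the function name is hypothetical):

```python
def discounted_return(rewards, gamma=1.0):
    """G_t = sum_k gamma^k * r_{t+k}, computed backwards over an episode."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

g = discounted_return([1.0, 2.0, 4.0], gamma=0.5)  # 1 + 0.5*2 + 0.25*4 = 3.0
```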

In other words, if we are in an apple harvesting environment, where the reward is the amount of apples harvested, here are the goals the different agents will pursue:

| Agent | Maximizer | ℵ-satisficer | ℵ-aspiring |
|---|---|---|---|
| Goal | Harvest as many apples as possible | Harvest at least ℵ apples | On expectation, harvest ℵ apples |

# Local Relative Aspiration

In the context of Q-learning, both the maximization and minimization policies (i.e., selecting $\arg\max_a Q(s, a)$ or $\arg\min_a Q(s, a)$) can be viewed as the extremes of a continuum of policies parameterized by $\lambda \in [0, 1]$, where $\lambda$ denotes the Local Relative Aspiration (LRA). At time $t$, such a policy $\pi_\lambda$ samples an action from a probability distribution $\pi_\lambda(\cdot \mid s_t)$, satisfying the equation:

$$\mathbb{E}_{a \sim \pi_\lambda(\cdot \mid s_t)}[Q(s_t, a)] = \min_a Q(s_t, a) :_\lambda \max_a Q(s_t, a)$$

Here, $x :_\lambda y$ denotes the interpolation between $x$ and $y$ using a factor $\lambda$, defined as:

$$x :_\lambda y := (1 - \lambda)\, x + \lambda\, y$$

This policy allows the agent to satisfy the equation at each time $t$, with $\lambda = 0$ corresponding to minimization and $\lambda = 1$ corresponding to maximization.

The most straightforward way to determine $a_t$ is to sample an action $a$ whose Q-value equals the interpolated target. If no such $a$ exists, we can define $\pi_\lambda$ as a mixture of two actions $a^-$ and $a^+$ whose Q-values bracket the target:

$$\pi_\lambda(a^+ \mid s_t) = \mu, \qquad \pi_\lambda(a^- \mid s_t) = 1 - \mu$$

where $\mu$ denotes the interpolation factor of the target value relative to the interval between $Q(s_t, a^-)$ and $Q(s_t, a^+)$, i.e.:

$$\mu = \frac{\big(\min_a Q(s_t, a) :_\lambda \max_a Q(s_t, a)\big) - Q(s_t, a^-)}{Q(s_t, a^+) - Q(s_t, a^-)}$$

The choice of $\mu$ ensures that $\pi_\lambda$ fulfills the equation.
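As a concrete illustration, here is a minimal sketch of this sampling scheme in plain NumPy (function and variable names are hypothetical, not from our codebase): it computes the interpolated target and, when no single action matches it, mixes the two actions whose Q-values bracket the target.

```python
import numpy as np

def lra_policy_sample(q_values, lam, rng):
    """Sample an action whose expected Q-value equals
    min(Q) + lam * (max(Q) - min(Q)).  q_values: 1-D array of Q(s, a)."""
    q = np.asarray(q_values, dtype=float)
    target = (1 - lam) * q.min() + lam * q.max()  # min :_lam max
    # If some action hits the target exactly, pick it.
    exact = np.flatnonzero(np.isclose(q, target))
    if exact.size > 0:
        return int(rng.choice(exact))
    # Otherwise mix the closest actions below and above the target.
    below = np.flatnonzero(q < target)
    above = np.flatnonzero(q > target)
    a_minus = below[np.argmax(q[below])]  # largest Q-value below target
    a_plus = above[np.argmin(q[above])]   # smallest Q-value above target
    # Interpolation factor of the target inside [Q(a-), Q(a+)].
    mu = (target - q[a_minus]) / (q[a_plus] - q[a_minus])
    return int(a_plus if rng.random() < mu else a_minus)

rng = np.random.default_rng(0)
q = [0.0, 1.0, 3.0]
# lam = 0.5 → target 1.5; mixes the actions with Q=1 and Q=3
actions = [lra_policy_sample(q, 0.5, rng) for _ in range(10000)]
mean_q = np.mean([q[a] for a in actions])  # ≈ 1.5
```

Averaging the chosen actions’ Q-values over many samples recovers the interpolated target, which is exactly the LRA property.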

This method is notable because we can learn the $Q_\lambda$ function associated to our $\pi_\lambda$ using similar updates to Q-learning and DQN. For a quick reminder, the Q-learning update is as follows:

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left( r_t + \gamma \max_a Q(s_{t+1}, a) - Q(s_t, a_t) \right)$$

where $\alpha$ is the learning rate. To transition to the $Q_\lambda$ update, we simply replace $\max_a Q(s_{t+1}, a)$ with:

$$\min_a Q_\lambda(s_{t+1}, a) :_\lambda \max_a Q_\lambda(s_{t+1}, a)$$

By employing this update target and replacing the greedy policy with $\pi_\lambda$ as defined above, we create two variants of Q-learning and DQN we call *LRA Q-learning* and *LRA-DQN*. Furthermore, LRA Q-learning maintains some of the key properties of Q-learning. Another intern proved that for all values of $\lambda$, $Q_\lambda$ converges to a function $Q_\lambda^*$, with $\lambda = 1$ corresponding to the maximizing policy and $\lambda = 0$ to the minimizing policy.
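In tabular form, the modified update can be sketched as follows (a minimal sketch with hypothetical names; the interpolated bootstrap value simply replaces Q-learning’s max):

```python
import numpy as np

def lra_q_update(Q, s, a, r, s_next, done, lam, alpha=0.1, gamma=0.99):
    """One LRA Q-learning update: the bootstrap value interpolates
    between min and max of the next state's Q-values with factor lam."""
    if done:
        bootstrap = 0.0
    else:
        q_next = Q[s_next]
        bootstrap = (1 - lam) * q_next.min() + lam * q_next.max()
    target = r + gamma * bootstrap
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# toy check on a 2-state, 2-action table
Q = np.zeros((2, 2))
Q = lra_q_update(Q, s=0, a=1, r=1.0, s_next=1, done=True, lam=0.5)
```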

However, the LRA approach to non-maximization has some drawbacks. For one, if we require the agent to use the same value of $\lambda$ in all steps, the resulting behavior can get unnecessarily stochastic. For example, assume that its aspiration is 2 in the following Markov Decision Process (MDP) environment:

Ideally we would want the agent to always choose the appropriate action, which would require $\lambda$ to be 100% in the first step and 0% in the second step. This is not possible using a policy which enforces a fixed $\lambda$ for every step. The only way to get 2 in expectation with a $\lambda$ that remains the same in both steps is to toss a coin in both steps, which also gives 2 in expectation.

The second drawback is that establishing a direct relationship between the value of $\lambda$ and an agent’s performance across different environments remains a challenge. In scenarios where actions only affect the reward, i.e.

$$P(s_{t+1} \mid s_t, a_t) = P(s_{t+1} \mid s_t)$$

such as the multi-armed bandit environment, the expected return is linear with respect to $\lambda$:

$$\mathbb{E}_{\pi_\lambda}[G_0] = \mathbb{E}_{\pi_0}[G_0] :_\lambda \mathbb{E}_{\pi_1}[G_0]$$

However, as soon as the distribution of the next state is influenced by $a_t$, which is the case in most environments, we can lose this property, as shown in this simple MDP:

If we run the LRA Q-learning algorithm on this MDP, the expected return is no longer linear in $\lambda$ once it has converged^{[2]}.

# Aspiration Propagation

The inability to robustly predict agent performance for a specific value of $\lambda$ shows that we cannot build an ℵ-aspiring agent with LRA alone^{[3]}. The only certainty we have is that if $\lambda < 1$, the agent will not maximize. However, it might be so close to maximizing that it attempts to exploit the reward system. This uncertainty motivates the transition to a global aspiration algorithm. Instead of specifying the LRA, we aim to directly specify the agent’s aspiration, $\aleph$, representing the return we expect the agent to achieve. The challenge then becomes how to *propagate* this aspiration from one timestep to the next. It is crucial that aspirations remain *consistent*, ensuring recursive fulfillment of $\mathbb{E}[G_t] = \aleph_t$:

$$\aleph_t = \mathbb{E}[r_t + \gamma\, \aleph_{t+1}]$$

A direct approach to ensure consistent aspiration propagation would be to employ a *hard* update, which consists in subtracting $r_t$ from $\aleph_t$ (and undoing one step of discounting):

$$\aleph_{t+1} := \frac{\aleph_t - r_t}{\gamma}$$

and then follow a policy $\pi$, which, at time $t$, fulfills:

$$\mathbb{E}_{a \sim \pi(\cdot \mid s_t)}[Q(s_t, a)] = \aleph_t$$
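In code, the hard update is a one-liner (a sketch with a hypothetical function name, assuming the $\gamma$-discounted return defined earlier):

```python
def hard_aspiration_update(aspiration, reward, gamma=1.0):
    """Hard update: subtract the received reward from the aspiration
    and undo one step of discounting."""
    return (aspiration - reward) / gamma

# after receiving a reward of 3 with aspiration 10 (undiscounted):
new_aleph = hard_aspiration_update(10.0, 3.0)  # 7.0
```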

However, this method of updating aspirations does not guarantee that the aspiration remains *feasible*:

$$\min_a Q(s_t, a) \le \aleph_t \le \max_a Q(s_t, a)$$

Ensuring feasibility is paramount: otherwise we can’t find such a policy $\pi$. If the aspiration remains feasible at every step, applying **consistency** at each $t$ guarantees that $\mathbb{E}[G_0] = \aleph_0$.

To elucidate the importance of feasibility and demonstrate why hard updates might be inadequate (since they do not ensure feasibility), consider this MDP:

Assume the agent is parameterized by $\gamma = 1$ and an aspiration $\aleph_0$, and possesses a comprehensive understanding of the reward distribution. Upon interacting with the environment and reaching the high-reward branch after its initial action, the agent’s return so far is 15, leading to a new aspiration of $\aleph_0 - 15$. This aspiration is no longer feasible, culminating in an episode end with $G_0 = 15$. If the agent instead reaches the low-reward branch, the updated aspiration remains feasible; consequently, the agent selects the corresponding action and the episode ends with the aspiration fulfilled on that branch. As a result, averaging over both branches, $\mathbb{E}[G_0] \ne \aleph_0$.

## Aspiration Rescaling

To address the aforementioned challenges, we introduce *Aspiration Rescaling* (AR). This approach ensures that the aspiration remains both *feasible* and *consistent* during propagation. To achieve this, we introduce two additional values, $\overline{Q}$ and $\underline{Q}$:

$$\overline{Q}(s, a) := \mathbb{E}\left[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') \,\middle|\, s_t = s,\, a_t = a \right]$$

$$\underline{Q}(s, a) := \mathbb{E}\left[ r_t + \gamma \min_{a'} Q(s_{t+1}, a') \,\middle|\, s_t = s,\, a_t = a \right]$$

These values provide insight into the potential bounds of subsequent states:

- $\overline{Q}(s, a)$ corresponds to “what will be my expected return if I choose action $a$ in state $s$, **choose the maximizing action in the next step**, and then continue with policy $\pi$”
- $\underline{Q}(s, a)$ corresponds to “what will be my expected return if I choose action $a$ in state $s$, **choose the minimizing action in the next step**, and then continue with policy $\pi$”

The AR strategy is to compute $\lambda_{t+1}$, the LRA for the next step, at time $t$, rather than directly determining $\aleph_{t+1}$. By calculating an LRA, **we ensure the aspiration will be feasible** in the next state. Furthermore, by selecting it such that

$$\underline{Q}(s_t, a_t) :_{\lambda_{t+1}} \overline{Q}(s_t, a_t) = \aleph_t$$

we ensure consistency. More precisely, at each step, the algorithm propagates its aspiration using the AR formula:

$$\aleph_{t+1} := \min_a Q(s_{t+1}, a) :_{\lambda_{t+1}} \max_a Q(s_{t+1}, a)$$

which ensures consistency, as depicted in this figure:

The mathematical proof of the algorithm’s consistency can be found in the appendix of my internship report.
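As a concrete illustration, here is a minimal sketch of this propagation step in plain Python (hypothetical names; `q_lo_sa` and `q_hi_sa` stand for the learned $\underline{Q}(s_t, a_t)$ and $\overline{Q}(s_t, a_t)$, and `q_next` for the Q-values of the next state):

```python
import numpy as np

def rescale_aspiration(aleph, q_lo_sa, q_hi_sa, q_next):
    """Propagate the aspiration from (s_t, a_t) to s_{t+1}.

    lam is chosen so that q_lo_sa :_lam q_hi_sa = aleph, then the new
    aspiration is min(Q(s', .)) :_lam max(Q(s', .))."""
    if np.isclose(q_hi_sa, q_lo_sa):
        lam = 0.5  # degenerate bounds: any lam is consistent
    else:
        lam = (aleph - q_lo_sa) / (q_hi_sa - q_lo_sa)
    q_next = np.asarray(q_next, dtype=float)
    return (1 - lam) * q_next.min() + lam * q_next.max()

# an aspiration halfway between the bounds stays halfway in the next state:
new_aleph = rescale_aspiration(5.0, q_lo_sa=0.0, q_hi_sa=10.0, q_next=[2.0, 6.0])
# lam = 0.5 → new aspiration = 4.0
```

Note how the rescaled aspiration is always inside $[\min_a Q(s_{t+1}, a), \max_a Q(s_{t+1}, a)]$ whenever $\lambda \in [0, 1]$, which is exactly the feasibility guarantee.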

As $\overline{Q}$ and $\underline{Q}$ cannot be derived from $Q$ (it would require knowing the environment’s dynamics), they need to be learned alongside $Q$.

As we don’t want the algorithm to alternate between maximizing and minimizing, we introduce a new smoothing parameter whose goal is to *smooth* the different $\lambda_t$ chosen by the algorithm, so that consecutive $\lambda_t$ are closer to each other.

Using this aspiration rescaling to propagate the aspiration, we derive the AR-Q-learning and AR-DQN algorithms:

**1 - Interact with the environment:** with probability $\epsilon$, take a random action; otherwise sample $a_t$ from the mixture policy s.t. $\mathbb{E}[Q(s_t, a_t)] = \aleph_t$.

**2 - Compute the targets for the 3 Q functions** $Q$, $\underline{Q}$ and $\overline{Q}$.

**3 - Update the Q estimators.** For example in Q-learning: $Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha\, (\text{target} - Q(s_t, a_t))$.
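The target computation in step 2 can be sketched as follows (tabular, hypothetical names; a sketch under the assumption that the aspiration-based policy in the next state fulfills the rescaled aspiration in expectation, so $Q$ bootstraps from it, while $\overline{Q}$ and $\underline{Q}$ bootstrap from the max and min of the next state's Q-values, per their definitions):

```python
import numpy as np

def ar_targets(r, gamma, q_next, next_aleph, done):
    """Targets for the three estimators at a transition (s, a, r, s').

    q_next: array of Q(s', .); next_aleph: the rescaled aspiration, which
    the aspiration-based policy is assumed to achieve in expectation."""
    if done:
        return r, r, r
    q_next = np.asarray(q_next, dtype=float)
    target_q = r + gamma * next_aleph      # policy achieves aleph on average
    target_lo = r + gamma * q_next.min()   # minimizing action next step
    target_hi = r + gamma * q_next.max()   # maximizing action next step
    return target_q, target_lo, target_hi

t_q, t_lo, t_hi = ar_targets(1.0, 1.0, [2.0, 6.0], next_aleph=4.0, done=False)
# → (5.0, 3.0, 7.0)
```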

## Generalization of Aspiration Rescaling

At the end of the internship we realized we could leverage the fact that, in the proof of AR’s consistency, we are not restricted to $\underline{Q}$ and $\overline{Q}$. In fact, we can use any proper Q functions $Q^-$ and $Q^+$ as “safety bounds” we want the Q values of our actions to be between. We can then actually derive $\lambda_{t+1}$ from $\aleph_t$, $Q^-$ and $Q^+$:

$$Q^-(s_t, a_t) :_{\lambda_{t+1}} Q^+(s_t, a_t) = \mathrm{clip}\big(\aleph_t;\, Q^-(s_t, a_t),\, Q^+(s_t, a_t)\big)$$

where we use this notation for “clipping”:

$$\mathrm{clip}(x;\, a,\, b) := \max(a, \min(x, b))$$

The rationale is that if the aspiration is included within the safety bounds, our algorithm will, on average, achieve it, hence $\mathbb{E}[G_0] = \aleph_0$. Otherwise, we will approach the aspiration as closely as our bounds permit. This method offers several advantages over our previous AR algorithms:
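This generalized rescaling can be sketched as follows (hypothetical names; `q_minus_sa` and `q_plus_sa` are the chosen safety bounds $Q^-(s_t, a_t)$ and $Q^+(s_t, a_t)$):

```python
def clip(x, lo, hi):
    """Clip x into the interval [lo, hi]."""
    return max(lo, min(x, hi))

def generalized_lambda(aleph, q_minus_sa, q_plus_sa):
    """Relative position of the clipped aspiration inside the safety
    bounds: q_minus :_lam q_plus = clip(aleph; q_minus, q_plus)."""
    clipped = clip(aleph, q_minus_sa, q_plus_sa)
    if q_plus_sa == q_minus_sa:
        return 0.5  # degenerate bounds
    return (clipped - q_minus_sa) / (q_plus_sa - q_minus_sa)

# inside the bounds, the aspiration is met exactly ...
lam_inside = generalized_lambda(4.0, 2.0, 10.0)   # 0.25
# ... outside, we get as close as the bounds permit
lam_above = generalized_lambda(15.0, 2.0, 10.0)   # 1.0
```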

**Adaptability:** $\aleph$ can be adjusted without necessitating retraining.

**Stability:** $Q^-$ and $Q^+$ can be trained independently, offering greater stability compared to training $Q$ alongside both of them simultaneously.

**Flexibility:** $Q^-$ and $Q^+$ can be trained using any algorithm, as long as the associated values remain valid bounds with $Q^- \le Q^+$.

**Modularity:** There are minimal constraints on the choice of the action lottery, potentially allowing the combination of aspiration with safety criteria for possible actions^{[4]}.

For instance, we can use LRA to learn $Q_{\lambda^-}$ and $Q_{\lambda^+}$ for two fixed values $\lambda^- < \lambda^+$, and use them along with $\underline{Q}$ and $\overline{Q}$ defined analogously. This algorithm is called **LRAR-DQN**.

# Experiments

Algorithms were implemented using the Stable Baselines 3 (SB3) framework. The presented results utilize the DRL version of the previously discussed algorithms, enabling performance comparisons in more complex environments. The DNN architecture employed is the default SB3 “MlpPolicy”. All environment rewards have been standardized such that the maximizing policy’s return is 1. Environments used were:

- **Iterated Multi-Armed Bandit (IMAB):** The agent chooses between several arms for a fixed number of rounds. Each arm gives a certain reward plus Gaussian noise. The observation is the number of rounds played.
- **Simple gridworlds:** We used the Boat race gridworld from AI Safety Gridworlds and the Empty env from Minigrid.

## LRA-DQN

We conducted experiments to explore the relationship between $\lambda$ and the expected return. In the IMAB setup, as expected, it is linear. In Boat race, it seems quadratic. Results for the Empty env also suggest a quadratic relationship, but with noticeable noise and a drop at $\lambda = 1$. Experiments with DQN showed that DQN itself was unstable in this environment, as indicated by this decline. Unfortunately, we did not have time to optimize the DQN hyperparameters for this environment.

As expected, we cannot robustly predict the agent’s performance for a specific value of $\lambda$.

## AR-DQN

Our experiments show that using a hard update^{[5]} yields more stable results. The AR update is primarily unstable due to the inaccuracy of aspiration rescaling in the initial stages, where unscaled Q-values lead to suboptimal strategies. As the exploration rate converges to 0, the learning algorithm gets stuck in a local optimum, failing to meet the target on expectation. In the IMAB environment, the algorithm’s excessive pessimism about feasibility, stemming from undervalued Q-values, was rectified by widening the bounds used for rescaling: instead of rescaling within $[\underline{Q}, \overline{Q}]$, we rescale within $[\underline{Q} - \delta,\, \overline{Q} + \delta]$ for some margin $\delta > 0$.

However, in the early training phase, the Q-values are small, which incentivizes the agent to select the maximizing actions.

You can see this training dynamic on this screenshot from the IMAB environment training run. No matter the value of $\aleph$:

- it starts each episode by selecting maximizing actions, and therefore overshoots its aspiration ($\mathbb{E}[G_0] > \aleph$);
- later in the training it realizes it was overshooting, and starts to avoid reward in the late stages of the episodes, lowering $\mathbb{E}[G_0]$;
- eventually, the mean episode return decreases towards $\aleph$.

We also introduced a new hyperparameter, $\mu$, to interpolate between hard updates and aspiration rescaling, leading to an updated aspiration propagation:

$$\aleph_{t+1} := (1 - \mu)\, \aleph^{\text{hard}}_{t+1} + \mu\, \aleph^{\text{AR}}_{t+1}$$

Here, $\mu = 0$ corresponds to a hard update, and $\mu = 1$ is, on expectation, equivalent to AR.
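The interpolated propagation can be sketched as (hypothetical names, assuming the convention that $\mu = 0$ recovers the hard update and $\mu = 1$ recovers aspiration rescaling; `hard_next` and `ar_next` are the two candidate aspirations):

```python
def interpolated_aspiration(hard_next, ar_next, mu):
    """Interpolate between the hard update (mu = 0) and
    aspiration rescaling (mu = 1)."""
    return (1 - mu) * hard_next + mu * ar_next

new_aleph = interpolated_aspiration(hard_next=7.0, ar_next=4.0, mu=0.25)  # 6.25
```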

We study the influence of $\mu$ and the smoothing parameter on the performance of the algorithm. The algorithm is evaluated using a set of target aspirations $\mathcal{A}$. For each aspiration, we train the algorithm and evaluate it using the mean squared distance between the achieved expected return and the target:

$$\frac{1}{|\mathcal{A}|} \sum_{\aleph \in \mathcal{A}} \big(\mathbb{E}[G_0] - \aleph\big)^2$$

This would be minimized to 0 by a perfect aspiration-based algorithm.
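One plausible instantiation of such a metric (a sketch assuming a mean-squared-error form; `eval_return` is a hypothetical callable returning the trained agent's empirical mean return for a given target aspiration):

```python
def aspiration_error(aspirations, eval_return):
    """Mean squared distance between achieved expected return and target,
    over a set of target aspirations.  A perfect aspiration-based
    algorithm would score 0."""
    return sum((eval_return(a) - a) ** 2 for a in aspirations) / len(aspirations)

# a toy agent that systematically overshoots by 0.1:
error = aspiration_error([0.2, 0.5, 0.8], lambda a: a + 0.1)  # ≈ 0.01
```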

As observed, having a small $\mu$ is crucial for good performance, while the smoothing parameter has a less predictable effect. This suggests that aspiration rescaling needs further refinement to be effective.

Comparing aspiration rescaling, hard update and Q-learning can give an intuition about why aspiration rescaling might be harder than hard update or classical Q-learning:

| | Q-learning | Hard update | Aspiration rescaling |
|---|---|---|---|
| Objective | Learn $Q$ | Learn $Q$ | Learn $Q$, $\underline{Q}$ and $\overline{Q}$ |
| Policy | Select $\arg\max_a Q(s_t, a)$ | Select $a_t$ s.t. $\mathbb{E}[Q(s_t, a_t)] = \aleph_t$ | Select $a_t$ s.t. $\mathbb{E}[Q(s_t, a_t)] = \aleph_t$ |
| Success condition | Qualitatively good $Q$ | Exact $Q$, or can recover from overshooting | Exact $Q$, $\underline{Q}$ and $\overline{Q}$ |

What makes aspiration rescaling harder than Q-learning is that Q-learning does not require Q values to be close to reality to choose the maximizing policy. It only requires that the best action according to $Q$ is the same as the one according to $Q^*$. In this sense, the learned $Q$ only needs to be a *qualitatively* good approximation of $Q^*$.

With hard updates, if the agent underestimates the return of its actions, it might choose maximizing actions in the beginning. But if it can recover from that (e.g., when it is able to stop collecting rewards), it might still be able to fulfill its aspiration.

However, aspiration rescaling demands values for $Q$, $\underline{Q}$ and $\overline{Q}$ that are *quantitatively* good approximations of their true values in order to rescale properly. Another complication arises as the three Q estimators and the policy are interdependent, potentially leading to unstable learning.

## LRAR-DQN

Results on LRAR-DQN confirm our hypothesis that precise Q values are essential for aspiration rescaling.

After 100k steps, in both Boat race and iterated MAB, the two LRA-DQN agents used for the safety bounds have already converged to their final policies. However, both Q-estimators still underestimate the Q values. As illustrated in figure 14, waiting for 1M steps does not alter the outcome with hard updates ($\mu = 0$), which depend less on the exact Q values. Nevertheless, the extra training enables AR ($\mu = 1$) to match their performance.

In our experiments, the LRAR-DQN algorithm exhibited suboptimal performance on the empty grid task. A potential explanation, which remains to be empirically validated, is the divergence in state encounters between the $Q^-$ and $Q^+$ estimators during training. Specifically, $Q^-$ appears to predominantly learn behaviors that lead to prolonged stagnation in the top-left corner, while $Q^+$ seems to be oriented towards reaching the exit within a reasonable number of timesteps. As a future direction, we propose extending the training of both $Q^-$ and $Q^+$ under the guidance of the LRAR-DQN policy to ascertain if this approach rectifies the observed challenges.

# Conclusion

Throughout the duration of this internship, we successfully laid the groundwork for aspiration-based Q-learning and DQN algorithms. These were implemented using Stable Baselines 3 to ensure that, once fully functional, aspiration-based algorithms can be readily assessed across a wide range of environments, notably Atari games. Future work will focus on refining the DQN algorithms, exploring the possibility of deriving aspiration-based algorithms from other RL methodologies such as Soft Actor-Critic or PPO, and investigating the behavior of ℵ-aspiring agents in multi-agent environments, both with and without maximizing agents.

- ^
Read “aleph”, the first letter of the Hebrew alphabet

- ^
In it will get in expectation and will choose in with a probability of . Therefore the expected will be .

- ^
Unless we are willing to numerically determine the relationship between $\lambda$ and $\mathbb{E}[G_0]$ and find $\lambda$ s.t. $\mathbb{E}_{\pi_\lambda}[G_0] = \aleph$

- ^
e.g., drawing actions in a more human-like way with something similar to quantilizers

- ^

I have a critique which is sort of a question because I’m not sure about it. I like the idea of optimizing for ‘expected likelihood of harvesting X apples’ instead of ‘harvest at least X apples’. But I worry that this is ‘kicking the can down the road’ a bit. Doesn’t this mean that a very powerful optimizing agent would try unreasonably hard to maximize the expected likelihood of harvesting X apples? Would such an intense single-minded attempt by a powerful agent be expected to have negative side-effects?

Hi Nathan, I’m not sure if I understand your critique correctly. The algorithm we describe does not try to “maximize the expected likelihood of harvesting X apples”. It tries to find a policy that, given its current knowledge of the world, will achieve an expected return of X apples. That is, it does not care about the probability of getting exactly X apples, but rather the average number of apples it will get over many trials. Does that make sense?

Thanks, yes, that helpfully makes it more clear. To check if my understanding has improved, is this a better summary?

The agent is tasked with designing a second agent (aka policy), such that the second agent will achieve an expected return of X across many trials.

The second agent is a non-learning agent (aka frozen). It could be potentially expressed by a frozen neural net, or decision tree, or code. Because it is static, it could be analyzed by humans or other programs before being used.

If so, then this sounds good to me. And is rather reminiscent of this other framing of such ideas: https://www.lesswrong.com/posts/sCJDstZrpCB8dQveA/using-uninterpretable-llms-to-generate-interpretable-ai-code

Hi Nathan,

I’m not sure. I guess it depends on what your definition of “agent” is. In my personal definition, following Yann LeCun’s recent whitepaper, the “agent” is a system with a number of different modules: one of them being a world model (in our case, an MDP that it can use to simulate consequences of possible policies), one being a policy (in our case, an ANN that takes states as inputs and gives action logits as outputs), and one being a learning algorithm (in our case, a variant of Q-learning that uses the world model to learn a policy that achieves a certain goal). The goal that the learning algorithm aims to find a suitable policy for is an aspiration-based goal: make the expected return equal some given value (or fall into some given interval). As a consequence, when this agent behaves like this very often in various environments with various goals, we can expect it to meet its goals on average (under mild conditions on the sequence of environments and goals, such as sufficient probabilistic independence of stochastic parts of the environment and bounded returns, so that the law of large numbers applies).

Now regarding your suggestion that the learned policy (what you call the frozen net I think) could be checked by humans before being used: that is a good idea for environments and policies that are not too complex for humans to understand. In more complex cases, one might want to involve another AI that tries to prove the proposed policy is unsafe for reasons not taken into account in selecting it in the first place, and one can think of many variations in the spirit of “debate” or “constitutional AI” etc.

Thanks, that makes sense!