Making a Difference Tempore: Insights from ‘Reinforcement Learning: An Introduction’

The safety of artificial intelligence applications involving reinforcement learning is a topic that deserves careful attention.

Foreword

Let’s get down to business.

Reinforcement Learning

1: Introduction

2: Multi-armed Bandits

Bandit basics, including nonstationarity, the value of optimism for incentivizing exploration, and upper-confidence-bound action selection.

Some explore/exploit results are relevant to daily life—I highly recommend reading Algorithms to Live By: The Computer Science of Human Decisions.

3: Finite Markov Decision Processes

The framework.

4: Dynamic Programming

Policy evaluation, policy improvement, policy iteration, value iteration, generalized policy iteration. What a nomenclative nightmare.

5: Monte Carlo Methods

Prediction, control, and importance sampling.

Importance Sampling

After gathering data with our behavior policy $b$, we then want to approximate the value function $v_\pi$ for the target policy $\pi$. In off-policy methods, the policy we use to gather the data is different from the one whose value we’re trying to learn; in other words, the distribution of states we sample is different. This gives us a skewed picture of $v_\pi$, so we must overcome this bias.

If $b$ can take all of the actions that $\pi$ can (i.e., $\pi(a \mid s) > 0 \implies b(a \mid s) > 0$), we can overcome the bias by adjusting the return $G_t$ of taking a series of actions using the importance-sampling ratio
$$\rho_{t:T-1} \doteq \prod_{k=t}^{T-1} \frac{\pi(A_k \mid S_k)}{b(A_k \mid S_k)}.$$
This cleanly recovers $v_\pi$ by the definition of expectation:
$$\mathbb{E}\big[\rho_{t:T-1} G_t \mid S_t = s\big] = v_\pi(s).$$

Then after observing some set of returns $\{G_t\}_{t \in \mathcal{T}(s)}$ (where $\mathcal{T}(s)$ contains the time steps whose returns are relevant for state $s$, and $T(t)$ is the end of the episode containing $t$), we define the state’s value as the average adjusted observed return
$$V(s) \doteq \frac{\sum_{t \in \mathcal{T}(s)} \rho_{t:T(t)-1} G_t}{|\mathcal{T}(s)|}.$$

However, the $\rho$’s can be arbitrarily large (if, say, $\pi(a \mid s) = .99$ and $b(a \mid s) = .001$, that single step contributes a factor of $990$), so the variance of this estimator can get pretty big. To get an estimator whose variance converges to 0, try weighted importance sampling:
$$V(s) \doteq \frac{\sum_{t \in \mathcal{T}(s)} \rho_{t:T(t)-1} G_t}{\sum_{t \in \mathcal{T}(s)} \rho_{t:T(t)-1}}.$$
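A minimal sketch of both estimators, assuming episodes have already been collected under the behavior policy; the episode format and the `target_policy(a, s)` / `behavior_policy(a, s)` callables here are hypothetical stand-ins, not anything from the book:

```python
import numpy as np

def importance_ratio(episode, target_policy, behavior_policy):
    """rho for one episode, given as a list of (state, action) pairs.
    target_policy(a, s) and behavior_policy(a, s) return action probabilities."""
    rho = 1.0
    for s, a in episode:
        rho *= target_policy(a, s) / behavior_policy(a, s)
    return rho

def ordinary_importance_sampling(returns, rhos):
    """Unbiased, but the variance can be unbounded."""
    return np.dot(rhos, returns) / len(returns)

def weighted_importance_sampling(returns, rhos):
    """Biased, but the variance converges to zero."""
    total = np.sum(rhos)
    return np.dot(rhos, returns) / total if total > 0 else 0.0
```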

Death and Discounting

Instead of viewing the discount factor $\gamma$ as measuring how much we care about the future, think of it as encoding the probability that we don’t terminate on any given step. That is, we expect to die on the next turn with probability $1 - \gamma$, so we discount rewards accordingly.
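Spelling that out (my gloss, not the book’s): if termination is independent of the reward process and the agent survives each step with probability $\gamma$, then by linearity of expectation the expected undiscounted return up to the random death time $T$ equals the usual discounted return:
$$\mathbb{E}\!\left[\sum_{t=0}^{T-1} R_{t+1}\right] = \sum_{t=0}^{\infty} P(T > t)\, \mathbb{E}[R_{t+1}] = \sum_{t=0}^{\infty} \gamma^{t}\, \mathbb{E}[R_{t+1}].$$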

This intuition hugs pre-theoretic understanding much more closely; if you have just 80 years to live, you might dive that big blue sky. If, however, you imagine there’s a non-trivial chance that humanity can be more than just a flash in the pan, you probably care more about the far future.

6: Temporal-Difference Learning

The tabular triple threat: TD(0), Sarsa, and $Q$-learning.

Learning TD Learning

  • TD(0) is one-step bootstrapping of state values $V(s)$. It helps you learn the value of a given policy, and is not used for control.

  • Sarsa is on-policy one-step bootstrapping of action values $Q(s, a)$ using the quintuples $(S_t, A_t, R_{t+1}, S_{t+1}, A_{t+1})$.

As in all on-policy methods, we continually estimate $q_\pi$ for the behavior policy $\pi$, and at the same time change $\pi$ toward greediness with respect to $q_\pi$.
  • $Q$-learning is off-policy one-step bootstrapping of action values $Q(s, a)$.

    • You take an action using the (e.g., $\epsilon$-greedy) behavior policy, and then use the maximal action value at the next state in your TD target term; see the sketch after this list.
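A minimal tabular sketch of both updates, to make the on-policy/off-policy contrast concrete; `Q` is assumed to map each state to a dict of action values, and `alpha`, `gamma` are the usual step size and discount:

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    """On-policy: bootstrap from the value of the action actually taken next."""
    td_target = r + gamma * Q[s_next][a_next]
    Q[s][a] += alpha * (td_target - Q[s][a])

def q_learning_update(Q, s, a, r, s_next, alpha, gamma):
    """Off-policy: bootstrap from the maximal action value at the next state."""
    td_target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (td_target - Q[s][a])
```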

Maximization Bias

With great branching factors come great biases, and optimistic bias is problematic for $Q$-learning.

Refresher: the $Q$-learning update rule for state $S_t$, action $A_t$, and new state $S_{t+1}$ is
$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \Big[ R_{t+1} + \gamma \max_a Q(S_{t+1}, a) - Q(S_t, A_t) \Big].$$

Suppose the rewards for all 100 actions at $S_{t+1}$ are distributed according to $\mathcal{N}(0, 1)$. All of these actions have a true (expected) value of 0, but the probability that none of their estimates are greater than .8 after 1 draw each is $\Phi(0.8)^{100} \approx 4.6 \times 10^{-11}$. The more actions you have, the higher the probability that the maximum estimate is just an outlier. See: regression toward the mean and mutual funds.
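A quick simulation of that effect (hypothetical setup: one noisy $\mathcal{N}(0,1)$ estimate for each of 100 actions whose true values are all zero):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_actions = 10_000, 100

# One noisy sample per action; every true action value is 0.
estimates = rng.normal(0.0, 1.0, size=(n_trials, n_actions))

# The max over actions is reliably optimistic: roughly 2.5 on average, never 0.
print(estimates.max(axis=1).mean())
```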

To deal with this, we toss another $Q$-learner into the mix; at any given update, one has its value updated while the other greedifies the policy. The double $Q$-learning scheme works because both $Q$-learners are unlikely to be biased in the same way. For some reason, this wasn’t discovered until 2010.
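A minimal sketch of the double $Q$-learning update, with the same hypothetical dict-of-dicts `Q` tables as above:

```python
import random

def double_q_update(Q1, Q2, s, a, r, s_next, alpha, gamma):
    """One learner selects the greedy action, the *other* evaluates it;
    decoupling selection from evaluation removes most of the maximization bias."""
    if random.random() < 0.5:
        Q1, Q2 = Q2, Q1  # each table gets updated on roughly half the steps
    best_a = max(Q1[s_next], key=Q1[s_next].get)  # Q1 selects...
    td_target = r + gamma * Q2[s_next][best_a]    # ...Q2 evaluates
    Q1[s][a] += alpha * (td_target - Q1[s][a])
```

Behavior is still chosen from both tables (e.g., $\epsilon$-greedy on their sum); only the update alternates.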

7: $n$-step Bootstrapping

$n$-step everything.

8: Planning and Learning with Tabular Methods

Models, prioritized sweeping, expected vs. sample updates, MCTS, and rollout algorithms.

Roles Models Play

Distribution models include the full range of possible futures and their probabilities. For example, a distribution model for two fair coins: $P(HH) = P(HT) = P(TH) = P(TT) = \frac{1}{4}$.

Sample models just let you, well, sample. … You could sample thousands of times and gain a high degree of confidence that the above distribution model is generating the data, but this wouldn’t be your only hypothesis (granted, it might have the lowest Kolmogorov complexity).

Note that a distribution model also can function as a sample model.
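A tiny sketch of the distinction for the two-coin example (the names are just illustrative):

```python
import random

# Distribution model: the full table of outcomes and their probabilities.
TWO_COINS = {("H", "H"): 0.25, ("H", "T"): 0.25,
             ("T", "H"): 0.25, ("T", "T"): 0.25}

# Sample model: can only draw one outcome at a time.
def sample_two_coins():
    return (random.choice("HT"), random.choice("HT"))

# A distribution model can always be used as a sample model:
def sample_from(distribution):
    outcomes = list(distribution)
    weights = list(distribution.values())
    return random.choices(outcomes, weights=weights, k=1)[0]
```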

9: On-policy Prediction with Approximation

We finally hit the good stuff: value-function approximators and stochastic-/semi-gradient methods.

10: On-policy Control with Approximation

11: Off-policy Methods with Approximation

The deadly triad of function approximation, bootstrapping, and off-policy training converges to sow d̴iv̢erge͏nc̸e͟ and i͟nstab̕il̡i̡t̷y in your methods. Metal.

12: Eligibility Traces

I was told to skip this chapter; the first half was my favorite part of the book.

TD($\lambda$) uses a backwards-view eligibility trace to update features, which I find elegant. Suppose you have some feature extraction function $\mathbf{x}(s)$. Then you apply your TD update not only to the features relevant at the current state, but also to the time-decaying traces of the features of previously visited states. $\lambda$ sets how quickly this eligibility decay happens; $\lambda = 0$ recovers TD(0), while $\lambda = 1$ recovers Monte Carlo return methods.

When I was a kid, there was a museum I loved going to—it had all kinds of wacky interactive devices for kids. One of them took sounds and “held them in the air” at a volume which slowly decreased as a function of how long ago the sound was made. State features are sounds, and volume is eligibility.
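Continuing the analogy, here is a minimal sketch of the backward view with accumulating traces and linear values; the `env.reset()`/`env.step()` interface and the feature function `x(s)` are hypothetical stand-ins:

```python
import numpy as np

def td_lambda_episode(env, policy, x, w, alpha, gamma, lam):
    """Semi-gradient TD(lambda) for one episode: recently active features stay
    'eligible' for a share of each new TD error, decaying by gamma * lam per step."""
    s = env.reset()
    z = np.zeros_like(w)                        # eligibility trace per feature (the "volume")
    done = False
    while not done:
        s_next, r, done = env.step(policy(s))
        v = w @ x(s)
        v_next = 0.0 if done else w @ x(s_next)
        delta = r + gamma * v_next - v          # one-step TD error
        z = gamma * lam * z + x(s)              # current features turned up to full volume
        w = w + alpha * delta * z               # credit flows back along the fading trace
        s = s_next
    return w
```

With `lam=0` this collapses to ordinary TD(0); with `lam=1` the credit assignment approaches the Monte Carlo return.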

13: Policy Gradient Methods

The policy gradient theorem, REINFORCE, and actor-critic.

14: Psychology

Creating a partial mapping between reinforcement learning and psychology.

Mental, Territorial

There was a word I was looking for that “mental model” didn’t quite seem to fit: “the model with respect to which we mentally simulate courses of action”. CFAR’s “inner sim” terminology didn’t quite map either, as to me, that points to the system-in-motion more than that-on-which-the-system-runs. The literature dubs this a cognitive map.

Individual place cells within the hippocampus correspond to separate locations in the environment with the sum of all cells contributing to a single map of an entire environment. The strength of the connections between the cells represents the distances between them in the actual environment. The same cells can be used for constructing several environments, though individual cells’ relationships to each other may differ on a map by map basis. The possible involvement of place cells in cognitive mapping has been seen in a number of mammalian species, including rats and macaque monkeys. Additionally, in a study of rats by Manns and Eichenbaum, pyramidal cells from within the hippocampus were also involved in representing object location and object identity, indicating their involvement in the creation of cognitive maps.
Wikipedia, Cognitive map

In light of my work on whitelisting, I’ve been thinking about why we’re so “object-oriented” in our mental lives. An off-the-cuff hypothesis: to better integrate with the rest of our mental models, the visual system directly links up to our object schema. One such object is then recognized and engraved as a discrete “thing” in our map. Hence, we emotionally “know” that the world “really is” made up of objects, and isn’t just a collection of particles.

Most of the information used by people for the cognitive mapping of spaces is gathered through the visual channel (Lynch, 1960).

Lahav and Mioduser’s research somewhat supports this idea, suggesting that blind people not only have lower-fidelity and more declarative (as opposed to procedural/interactive) cognitive maps, but are also less likely to provide object-to-object descriptions.

Epistemic status: I made a not-obviously-wrong hypothesis and found two pieces of corroborating evidence. If anyone who knows more about this wants to chime in, please do!

15: Neuroscience

The reward prediction error hypothesis, dopamine, neural actor-critic, hedonistic neurons, and addiction.

16: Applications and Case Studies

From checkers to checkmate.

17: Frontiers

Temporal abstraction, designing reward signals, and the future of reinforcement learning. In particular, the idea I had for having a whitelist-enabled agent predict expected object-level transitions is actually one of the frontiers: general value functions. Rad.

Pandora’s AI Boxing

The rapid pace of advances in AI has led to warnings that AI poses serious threats to our societies, even to humanity itself.

This chapter talks a fair amount about the serious challenges in AI alignment (not sure if you all have heard of that problem), which is heartening.

As to safety, hazards possible with reinforcement learning are not completely different from those that have been managed successfully for related applications of optimization and control methods.

I’m not sure about that. Admittedly, I’m not as familiar with those solutions, but the challenge here seems to be of a qualitatively different caliber. Conditional on true AI’s achievement, we’d want to have extremely high confidence that it pans out before we flip the switch. The authors acknowledge that

it may be impossible for the agent to achieve the designer’s goal no matter what its reward signal is.

I don’t think it’s impossible, but it’s going to be extremely hard to get formal probabilistic guarantees. I mean, if you don’t know an agent’s rationality, you can’t learn their utility function. If you do know their rationality but not their probability distribution over outcomes, you still can’t learn their utility function.

This is just one of the many open problems in alignment theory. If that’s not enough, there are always the unknown unknowns...

Final Thoughts

I read the “nearly-complete” draft of the second edition, available here. It was pretty good, but I did find most of the exercises either too easy or requiring considerable programming investment to set up the environment described. The former makes sense, as I’ve already taken a course on this, and I’m probably a bit above the introductory level.

Some graphs could have been more clearly designed, and some math in the proof of linear TD(0)’s convergence (pp. 206–207) is underspecified:

In general, $\mathbf{w}_t$ will be reduced toward zero whenever $\mathbf{A}$ is positive definite, meaning $y^\top \mathbf{A} y > 0$ for real vector $y$.

An additional precondition: $y$ can’t be the zero vector.

For a key matrix of this type, positive definiteness is assured if all of its columns sum to a nonnegative number.

Unless I totally missed what “this type” entails, this is false if taken at face value:

For example, $\begin{pmatrix} 3 & 10 \\ 0 & -7 \end{pmatrix}$ has nonnegative column sums and is also indefinite, having eigenvalues of 3 and −7.

However, the claim is true in the subtle way they use it—they’re actually showing that since the matrix is symmetric, real, and strictly diagonally dominant with positive diagonal entries, it’s also positive definite. This could be made clearer.
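A quick numerical check of the face-value counterexample above, plus the diagonal-dominance route the proof actually relies on (the matrices here are mine, just for illustration):

```python
import numpy as np

A = np.array([[3.0, 10.0],
              [0.0, -7.0]])
print(A.sum(axis=0))         # column sums [3. 3.]: both nonnegative
print(np.linalg.eigvals(A))  # eigenvalues 3 and -7: indefinite

# What the proof actually uses: a real symmetric matrix that is strictly
# diagonally dominant with positive diagonal entries is positive definite.
B = np.array([[4.0, 1.0],
              [1.0, 3.0]])
print(np.all(np.linalg.eigvalsh(B) > 0))  # True
```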

In all, reading this book was definitely a positive experience.

Forwards

I’ll be finishing Analysis II before moving on to Jaynes’s Probability Theory in preparation for a class in the fall.

Dota 2

Recently, OpenAI made waves with their OpenAI Five Dota 2 bot. To REINFORCE what I just learned and solidified, I might make a post in the near future breaking down how Five differs from the Alpha(Go) Zero approach, quantifying my expectations for The International for calibration.

No Longer a Spectator

Four months and one week ago, I started my journey through the MIRI reading list. In those dark days, attempting a proof induced a stupor similar to that I encountered approaching a crush in grade school, my words and thoughts leaving me.

Six textbooks later and with a little effort, I’m able to prove things like the convergence of Monte Carlo integration to the Riemann integral, threading together lessons from All of Statistics and Analysis I; target in mind, words and thoughts remaining firmly by my side.

The rapid pace of advances in artificial intelligence has led to warnings that artificial intelligence poses serious threats to our societies, even to humanity itself. The renowned scientist and artificial intelligence pioneer Herbert Simon anticipated the warnings we are hearing today…
He spoke of the eternal conflict between the promise and perils of any new knowledge, reminding us of the Greek myths of Prometheus, the hero of modern science, who stole fire from the gods for the benefit of mankind, and Pandora, whose box could be opened by a small and innocent action to release untold perils on the world.
Simon urged us to recognize that as designers of our future and not mere spectators, the decisions we make can tilt the scale in Prometheus’ favor.

If you are interested in working together to learn MIRI-relevant math, if you have a burning desire to knock the alignment problem down a peg, if you’re in a scale-tilting mood—I would be more than happy to work with you. Messaging me may also net you an invitation to the MIRIx Discord server.