[Book Review] “The Alignment Problem” by Brian Christian

I came to this book with an ax to grind. I combed through page after page for factual errors, minor misrepresentations, or even just contestable opinions. I spotted (what seemed like) omission after omission, only to be frustrated a few pages later when Brian Christian addressed them. In Chapter 5: Shaping I thought I had found a major mistake. Brian Christian addressed Skinnerian operant conditioning without addressing the real way we manage human groups: leading by example.

That’s because he dedicated all of Chapter 7: Imitation to the subject. Thus, through gritted teeth, I reluctantly acknowledge that The Alignment Problem by Brian Christian is a fantastic book in all respects.

Despite my best efforts, Brian Christian even taught me lots of cool things about state-of-the-art machine learning. The Alignment Problem addresses advanced technical problems while being readable to non-technical people. This book would be a useful read both for activists who want to better understand public policy AND for aspiring engineers who want to get up to speed with machine learning. The only possible fault I can imagine with this book is that, since it depends so heavily on cutting-edge research, it might be rendered obsolete in a decade or two. Much of it mirrors the actual technical work I’m doing in machine learning.

The book starts with practical real-world problems that are happening right now. Most of the book is dedicated to explaining machine learning problems and their solutions. At the end it extrapolates to the choices machine learning creates for our future.

Racist Machines

In 2015 Google famously released an image classifier that labelled Black people as “gorillas” because there were so few Black people in its training dataset. The good solution is to add more Black people to the training dataset. The fast solution is to keep using a biased algorithm and just cover up the most egregious errors. I don’t know which approach Google went with but “three years later, in 2018, Wired reported that the label ‘gorilla’ was still manually deactivated in Google Photos.”

I’m curious what animal I would get classified as if people who look like me were removed from Google Photos’ training dataset. (I hope it’s a meerkat.) Alas, computer algorithms are already being used to make much more consequential decisions. Many of these decisions involve problems that can’t be solved just by collecting more training data.

The Mathematics of Social Justice

[This section refers to situations in the United States unless otherwise noted.]

Black people and White people self-report similar marijuana usage. However, Black people are arrested for marijuana usage much more frequently than White people. Suppose you are designing an algorithm to determine how much to punish a prisoner for smoking marijuana. If you ignore the prisoner’s race then you will penalize Black people several times more harshly than White people. Race blindness produces racist outcomes.

Suppose we factor in race to create fair outcomes on average. A White citizen arrested for smoking pot now gets punished 7× per use compared to a Black citizen. This is unfair to White people, who are now treated more harshly on the basis of skin color. We are balancing two mutually exclusive values: either we punish Black people unfairly or we punish White people unfairly. We cannot be simultaneously fair to both groups.

The solution to the above problem is “stop arresting Black people disproportionately for the same crime” but that only solves the problem for crimes where different races have the same base rates. What happens when Black people actually do commit more crimes?

Suppose we’re designing a system to decide which convicts to offer parole. Correctly predicting who will go on to commit more crimes prevents crime, so we want our algorithm to be as accurate as possible. Race has predictive value; if we ignore it, violent crimes will be committed against innocent people. But we also want the algorithm to be as fair as possible: to treat people of different races the same way.

It is mathematically impossible to satisfy both criteria simultaneously. If we maximize accuracy then Black people will be offered parole less frequently than White people with identical criminal records. If we maximize racial fairness then we lose predictive accuracy; White people have to commit fewer crimes than their Black counterparts to earn equivalent treatment.
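The impossibility is simple arithmetic. Here is a minimal sketch (toy numbers of my own, not figures from the book) built on the standard identity relating a classifier’s false positive rate to the group’s base rate, precision (PPV), and detection rate (TPR): if two groups have different base rates, a classifier that is equally accurate for both groups must have unequal false positive rates.

```python
def fpr(base_rate, ppv, tpr):
    # Identity: FPR = base/(1-base) * (1-PPV)/PPV * TPR.
    # Fix PPV and TPR across groups, and FPR is forced by the base rate.
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * tpr

# Hypothetical base rates for two groups; same precision and detection rate.
fpr_a = fpr(0.51, 0.7, 0.6)  # group with 51% base rate
fpr_b = fpr(0.28, 0.7, 0.6)  # group with 28% base rate

print(fpr_a)  # ~0.268: people in group A are flagged wrongly far more often
print(fpr_b)  # 0.1
```

With equal accuracy (PPV) for both groups, members of the higher-base-rate group who will *not* reoffend are nonetheless denied parole more often. Equalizing the false positive rates instead forces the accuracy to differ. That is the whole tradeoff in four lines of arithmetic.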

In both cases we need to know people’s races.

  • Race correlates with recidivism. If you want to minimize recidivism then it’s useful to know potential parolees’ races.

  • Race correlates with lots of other measurable attributes. If we want to be race-blind it’s not enough to just erase the “race” column from our training data. We must control for everything that correlates with race. Knowing people’s races is a prerequisite to designing a race-blind system.
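To see why erasing the column isn’t enough, here is a toy illustration (all data hypothetical, invented for this sketch): a “race-blind” model that never sees the race column can still recover race from a correlated proxy such as zip code.

```python
from collections import Counter, defaultdict

# Hypothetical records. The model below is only ever shown the zip code.
rows = [
    {"zip": "60601", "race": "A"}, {"zip": "60601", "race": "A"},
    {"zip": "60601", "race": "A"}, {"zip": "60601", "race": "B"},
    {"zip": "60629", "race": "B"}, {"zip": "60629", "race": "B"},
    {"zip": "60629", "race": "B"}, {"zip": "60629", "race": "A"},
]

# "Race-blind" predictor: guess the majority race of each zip code.
by_zip = defaultdict(Counter)
for row in rows:
    by_zip[row["zip"]][row["race"]] += 1
predict = {z: counts.most_common(1)[0][0] for z, counts in by_zip.items()}

accuracy = sum(predict[r["zip"]] == r["race"] for r in rows) / len(rows)
print(accuracy)  # 0.75 — race leaks straight through the proxy
```

Any model trained on zip code inherits this leak, which is why auditing for race-blindness requires knowing the races you are trying to be blind to.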

“The most robust fact in the research area,” [Moritz] Hardt says, “is that fairness through blindness doesn’t work. That’s the most established and most robust fact in the entire research area.”

I like that computer algorithms are imposing racist judgments on vulnerable populations. Algorithms may be hard to debug but at least they are possible to debug. For thousands of years we have relied on the whims of human judges. Upgrading from opaquely racist humans to transparently racist algorithms is a giant step forward for society!

Embedding Human Values

Everything we’ve covered so far refers to simple algorithms. What if we just told the computer “do the right thing”? The phrase “do the right thing” is written in English. English is sexist. You can observe the sexism of English by dumping English text into word2vec and then doing arithmetic on it.
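Here is a sketch of the kind of arithmetic involved, using tiny hand-made vectors of my own invention rather than real word2vec embeddings (real ones have hundreds of dimensions and are learned from large English corpora, which is exactly where the sexism comes from).

```python
import math

# Toy 3-dimensional "embeddings" (hypothetical numbers, for illustration only).
vecs = {
    "man":      [1.0, 0.0, 0.2],
    "woman":    [0.0, 1.0, 0.2],
    "doctor":   [1.0, 0.1, 0.9],
    "nurse":    [0.1, 1.0, 0.9],
    "engineer": [0.9, 0.1, 0.8],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# The famous analogy arithmetic: "doctor" - "man" + "woman" ≈ ?
query = [d - m + w for d, m, w in zip(vecs["doctor"], vecs["man"], vecs["woman"])]
best = max((w for w in vecs if w not in ("doctor", "man", "woman")),
           key=lambda w: cosine(query, vecs[w]))
print(best)  # "nurse" — the gendered association falls out of the geometry
```

With embeddings trained on actual English text the same arithmetic famously produces gendered completions, which is the observation the chapter builds on.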

This is another manifestation of the accuracy-vs-fairness problem we observed earlier. Nurses are overwhelmingly female. Either a language reflects this or it doesn’t. If the language reflects the fact that nurses are mostly female then the language is sexist. If the language ignores real-world phenomena then its map deviates from the territory.

Transparency

After explaining the societal tradeoffs of algorithms, Brian Christian writes a chapter about interpretability in machine learning. The core idea is that it’s useful to have tools that show us what’s going on inside machine learning algorithms. Chapter 3: Transparency is the best machine learning textbook I’ve read in a long time, which is weird because it includes neither equations nor computer code.

Here is the most important paragraph.

When Yahoo’s vision team open-sourced a model used to detect whether an uploaded image was pornographic, UC Davis PhD student Gabriel Goh used this generative method to tune static into shapes the network regarded as maximally “not safe for work.” The result was like pornography by Salvador Dalí. If you optimize for some combination of the obscenity filter and normal ImageNet category labels―for instance, volcanoes―you get, in this case, obscene geography: what look like giant granite phalluses, ejaculating clouds of volcanic ash.

Brian Christian is such a tease. He doesn’t reproduce any of the images that go with this description! The original paper has even been taken down from GitLab. A backup is available here [NSFW] on Archive.org.

You’re welcome.

Reinforcement Learning

Having already turned social justice into mathematics and explained how to debug a neural network, Brian Christian explains operant conditioning as it applies to machine learning. This chapter is weak in the sense that it does not comprehensively explain the entire art of animal training (for that, read Don’t Shoot the Dog by Karen Pryor). The Alignment Problem isn’t the best introductory psychology textbook I’ve ever read. But it’s not the worst either. Which is impressive for a book about machines. It’s not trying to be an introductory psychology textbook.

Animals are good at reinforcement learning. Machines aren’t, because feedback is “terse”, “not especially constructive” and “delayed”―all of which throws a wrench in stochastic gradient descent. The delay especially makes stochastic gradient descent difficult because it feeds a combinatorial explosion in hypothesis space. We’re not sure what to use instead because we don’t know how the brain works.
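A minimal sketch of the delayed-reward problem (a hypothetical 100-step episode of my own, not an example from the book): when the only reward arrives at the very end, every earlier action is credited purely through a geometric discount of that single reward, with no signal about which of the hundred moves actually mattered. That is the credit-assignment side of the combinatorial explosion.

```python
# Terse, delayed feedback: a single reward of 1.0 at step 100, zeros before it.
gamma = 0.99
rewards = [0.0] * 99 + [1.0]

# Compute the discounted return G_t for every timestep, working backwards:
# G_t = r_t + gamma * G_{t+1}.
returns = []
g = 0.0
for r in reversed(rewards):
    g = r + gamma * g
    returns.append(g)
returns.reverse()

print(returns[-1])  # 1.0 — the final action gets full credit
print(returns[0])   # 0.99**99 ≈ 0.37 — the first action gets the same reward,
                    # just discounted, whether or not it was the move that mattered
```

Every action in the episode receives credit proportional only to its distance from the reward, which is why one delayed reward gives almost no information for untangling a hundred intermediate decisions.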

Brian Christian seems to subscribe to the idea that the biological brain is based around reinforcement learning against deviations from expectation. I agree. The fact that animals are reinforcement learners is observable just from black-box behaviorist experiments. “Deviations from expectation” is a synonym for predictive processing. All of this is just setup for the thesis of The Alignment Problem: what happens when you implement sample-efficient reinforcement learning in a powerful machine intelligence?

Reinforcement learning in its classical form takes for granted the structure of the rewards in the world and asks the question of how to arrive at the behavior―the “policy”―that maximally reaps them. But in many ways this obscures the more interesting―and more dire―matter that faces us at the brink of AI. We find ourselves rather more interested in the exact opposite of this question: Given the behavior we want from our machines, how do we structure the environment’s rewards to bring that behavior about? How do we get what we want when it is we who sit in the back of the audience, in the critic’s chair―we who administer the food pellets, or their digital equivalent?

This is the alignment problem, in the context of a reinforcement learner.

Brian Christian does a good job of stating the alignment problem as it is understood by the mainstream AI Safety community (insofar as “AI Safety” can be considered “mainstream”).

If you want an employee to behave creatively and ethically, punishments and rewards alone don’t work. A human, a cat, or even a machine will try to hack your system. Operant conditioning is just one tool in the toolbox. If we want to build a superintelligence we need more robust tools.

Curiosity

Dogs and dolphins trained by human beings are motivated by more than just treats. They get bored. Plus, dogs like making people happy.

Reinforcement learning doesn’t work if the rewards are too infrequent. One way to get around this is to manually shape behavior (which is fraught with risks). Another way is to program curiosity. This chapter rounds out the extremism of the Skinnerian chapter. Brian Christian conceives of intrinsic motivation as “novelty or surprise or some other related scheme”.

This bit is really cool. You can get extremely powerful videogame-playing AIs by motivating them solely with novelty and ignoring score entirely, because dying in a videogame reverts you to the start screen, which is boring.
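One common way to operationalize “boring” is a count-based novelty bonus (my illustration of a standard scheme, not necessarily the specific method Christian describes): the agent is rewarded for states it has rarely seen, so a start screen it keeps dying back to quickly stops paying out.

```python
import math
from collections import Counter

visit_counts = Counter()

def novelty_bonus(state):
    # Reward scales as 1/sqrt(N): frequently revisited states become boring.
    visit_counts[state] += 1
    return 1.0 / math.sqrt(visit_counts[state])

print(novelty_bonus("new_room"))      # 1.0 — never seen before, maximally novel
print(novelty_bonus("start_screen"))  # 1.0
print(novelty_bonus("start_screen"))  # ~0.707 — dying and restarting gets boring
print(novelty_bonus("start_screen"))  # ~0.577
```

An agent maximizing this bonus avoids death not because death is penalized but because the start screen has the worst novelty payout in the game.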

If this sounds like how human children play videogames… well, yeah.

Imitation

The other way to get a machine to do what you want is to do it yourself and tell the machine to copy you…like a human child.

Corrigibility

Corrigibility is the property that humans remain able to pull the plug on a machine. Corrigibility is easy enough to build into machines with constrained views of reality. It is hard to maintain for intelligent or superintelligent agents attempting to optimize the real world, because human interference is an obstacle for a machine to route around. This is the heart of the alignment problem so popular on Less Wrong.

The ultimate solution is “a machine which embodies human preferences”. This is hard on both technical and philosophical grounds. Human values are complex, under-specified, and built on top of ontological prejudices; they produce mathematically irreconcilable internal contradictions within individual people and differ between people. They evolve over time. Plus there’s the issue of moral uncertainty. Add on the technical puzzles of building a machine to embody those ideas and the challenge is barely on the near side of possible.

But if we can do it we will have brought God to a godless universe.

Credits

This post was funded by Less Wrong. Thank you!