# Optimization Amplifies

I talk here about how a mathematician mindset can be useful for AI alignment. But first, a puzzle:

Given a number n, what is the least number k ≥ 2 such that for every base b with 2 ≤ b ≤ n, the base b representation of k consists entirely of 0s and 1s?

If you want to think about it yourself, stop reading.

For n = 2, k = 2.

For n = 3, k = 3.

For n = 4, k = 4.

For n = 5, k = 82,000.

Indeed, 82,000 is 10100000001010000 in binary, 11011111001 in ternary, 110001100 in base 4, and 10111000 in base 5.
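These claims are easy to check mechanically. The following sketch (my own illustration, not part of the original post) verifies that 82,000 is written with only 0s and 1s in bases 2 through 5, and that no smaller k ≥ 2 is:

```python
def digits(n, b):
    """Digits of n in base b, most significant first."""
    out = []
    while n:
        out.append(n % b)
        n //= b
    return out[::-1] or [0]

def all_zeros_and_ones(n, bases):
    """True iff n is written with only 0s and 1s in every given base."""
    return all(set(digits(n, b)) <= {0, 1} for b in bases)

# 82,000 works in bases 2 through 5...
assert all_zeros_and_ones(82000, range(2, 6))
# ...and no smaller k >= 2 does.
assert not any(all_zeros_and_ones(k, range(2, 6)) for k in range(2, 82000))
print("verified: k = 82,000 for n = 5")
```

The brute-force search over all numbers below 82,000 takes well under a second.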

What about when n = 6?

So, a mathematician might tell you that this is an open problem. It is not known whether there is any k ≥ 2 which consists entirely of 0s and 1s in bases 2 through 6.

A scientist, on the other hand, might just tell you that clearly no such number exists. There are 2^k numbers of at most k digits that consist entirely of 0s and 1s in base 6. Each of these has roughly k·log(6)/log(5) ≈ 1.11k digits in base 5, and assuming things are roughly evenly distributed, each of these digits is a 0 or a 1 with “probability” 2/5. The “probability” that any number of length k has the property is thus less than 2^k · (2/5)^(1.11k) ≈ 0.72^k. This means that as you increase k, the “probability” that you find a number with the property drops off exponentially, and this is not even considering bases 3 and 4. Also, all numbers up to 2,000 digits have been checked. No number with this property exists.
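The scientist's back-of-the-envelope argument can be run numerically. This sketch (my own illustration, not code from the post; the 16-digit cutoff is an arbitrary choice for speed) computes the per-digit decay factor and then brute-forces every base-6 zero-one number of up to 16 digits against bases 3, 4, and 5:

```python
from itertools import product
from math import log

# Per-digit decay factor in the scientist's estimate: 2^k candidates,
# each all-0/1 in base 5 with "probability" (2/5)^(1.11 k).
ratio = 2 * (2 / 5) ** (log(6) / log(5))
print(f"per-digit factor: {ratio:.2f}")  # about 0.72 -> exponential decay

def only_zeros_and_ones(n, b):
    while n:
        if n % b > 1:
            return False
        n //= b
    return True

# Enumerate every number whose base-6 digits are all 0s and 1s
# (up to 16 digits) and test it in bases 3, 4, and 5 as well.
hits = []
for k in range(1, 17):
    for bits in product((0, 1), repeat=k - 1):
        n = 6 ** (k - 1) + sum(d * 6 ** i for i, d in enumerate(bits))
        if all(only_zeros_and_ones(n, b) for b in (3, 4, 5)):
            hits.append(n)
print(hits)  # only the trivial hit: [1]
```

As the estimate predicts, nothing beyond the trivial 1 turns up.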

Who is right?

Well, they are both right. If you want to have fun playing games with proofs, you can consider it an open problem and try to prove it. If you want to get the right answer, just listen to the scientist. If you have to choose between destroying the world with a 1% probability and destroying the world if a number greater than 2 which consists of 0s and 1s in bases 2 through 6 exists, go with the latter.

It is tempting to say that we might be in a situation similar to this. We need to figure out how to make safe AI, and we maybe don’t have that much time. Maybe we need to run experiments, and figure out what is true about what we should do and not waste our time with math. Then why are the folks at MIRI doing all this pure math stuff, and why does CHAI talk about “proofs” of desired AI properties? It would seem that if the end of the world is at stake, we need scientists, not mathematicians.

I would agree with the above sentiment if we were averting an asteroid, or a plague, or global warming, but I think it fails to apply to AI alignment. This is because optimization amplifies things.

As a simple example of optimization, let X_i for 1 ≤ i ≤ 1,000,000 be i.i.d. random numbers which are normally distributed with mean 0 and standard deviation 1. If I choose an X_i at random, the probability that it is greater than 4 is about 0.003%. However, if I optimize, and choose the greatest X_i, the probability that it is greater than 4 is very close to 100%. This is the kind of thing that optimization does. It searches through a bunch of options, and takes extreme ones. This has the effect of making things that would be very small probabilities much larger.
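This is easy to simulate. The sketch below (my own illustration; the threshold of 4 and the million samples follow the text) contrasts a random draw with the maximum:

```python
import random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(1_000_000)]

# One draw chosen at random exceeds 4 with probability ~0.003%.
picked = random.choice(xs)

# The maximum exceeds 4 almost surely: the expected number of
# exceedances among a million draws is about 32.
print(max(xs) > 4)  # True
print(sum(x > 4 for x in xs))  # a few dozen exceedances
```

Optimization (taking the max) turns a one-in-thirty-thousand event into a near certainty.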

Optimization also leads to very steep phase shifts, because it can send something on one side of a threshold to one extreme, and send things on the other side of a threshold to another extreme. Let X_i for 1 ≤ i ≤ 1,000,000 be i.i.d. random numbers that are uniform in the unit interval. If you look at the first 10 numbers and take the one that is furthest away from .499, the distribution over outcomes will be bimodal, with peaks near 0 and 1. If you take the one that is furthest away from .501, you will get a very similar distribution. Now instead consider what happens if you look at all 1,000,000 numbers and take the one that is furthest from .499. You will almost certainly get a number very close to 1. On the other hand, the one that is furthest from .501 will almost certainly be very close to 0. As you slightly change the optimization target, the result of a weak optimization might not change much, but the result of a strong one can change things drastically.
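A quick simulation (my own sketch, following the numbers in the text) shows the phase shift:

```python
import random

random.seed(0)
xs = [random.random() for _ in range(1_000_000)]

def furthest(sample, target):
    """The element of `sample` that maximizes distance from `target`."""
    return max(sample, key=lambda x: abs(x - target))

# Weak optimization: over 10 samples, the two targets behave alike.
print(furthest(xs[:10], 0.499), furthest(xs[:10], 0.501))

# Strong optimization: over all 1,000,000 samples, the max is ~1 and the
# min is ~0, so the tiny change in target flips the winner completely.
print(furthest(xs, 0.499))  # almost certainly very close to 1
print(furthest(xs, 0.501))  # almost certainly very close to 0
```

With a million samples the extremes are within about 10^-6 of 0 and 1, so the winner is decided entirely by which side of the target is 0.001 farther away.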

As a very rough approximation, a scientist is good at telling the difference between probability 0.01% and probability 99.99%, while the mathematician is good at telling the difference between 99.99% and 100%. Similarly, the scientist is good at telling if x > 0, while the mathematician is good at telling if x = 0 when you already know that |x| ≤ 0.0001.

If you only want to get an approximately correct answer almost surely, the absence of strong optimization pressure makes the mathematician's skills much less useful. However, strong optimization pressure amplifies small errors and creates discontinuities, which creates the necessity for a mathematician's level of precision even to achieve approximate correctness in practice.

Notes:

1) I am not just saying that adversarial optimization makes small probabilities of failure large. I am saying that, in general, any optimization at all drastically amplifies small probabilities and small errors.

2) I am not saying that we don’t need scientists. I am saying that we don’t just need scientists, and I am saying that scientists should pay some attention to the mathematician mindset. There is a lot to be gained from getting your hands dirty in experiments.

3) I am not saying that we should only be satisfied if we achieve certainty that an AI system will be safe. That’s an impossibly high standard. I am saying that we should aim for a deep formal understanding of what is going on, more like the “fully reduced” understanding we have of steam engines or rockets.


I think the simple mathematical models here are very helpful in pointing to some intuitions about being confident systems will work even with major optimisation pressure applied, and why optimisation power makes things weird. I would like to see other researchers in alignment review this post, because I don’t fully trust my taste on posts like these.

I don’t like the intro to the post. I feel like the example Scott gives makes the opposite of the point he wants it to make. Either a number with the given property exists or not. If such a number doesn’t exist, creating a superintelligence won’t change that fact. Given talk I’ve heard around the near certainty of AI doom, betting the human race on the nonexistence of a number like this looks pretty attractive by comparison—and it’s plausible there are AI alignment bets we could make that are analogous to this bet.

One of the main explanations of the AI alignment problem I link people to.