Two senses of “optimizer”

The word “optimizer” can be used in at least two different ways.

First, a system can be an “optimizer” in the sense that it is solving a computational optimization problem. A computer running a linear program solver, a SAT solver, or gradient descent is an example of a system that is an “optimizer” in this sense. That is, it runs an optimization algorithm. Let “optimizer_1” denote this concept.
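For concreteness, here is a minimal sketch of an optimizer_1 (my own illustrative example in Python; the objective function and hyperparameters are arbitrary choices, not anything canonical):

```python
# A minimal sketch of an optimizer_1: a system that runs an
# optimization algorithm over an internal objective. Its output is
# just the solution it finds; nothing here acts on an environment.

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function given its gradient, starting from x0."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(minimum)  # ~3.0 -- an optimization problem has been solved,
                # but no environment has been pushed in any direction.
```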

Second, a system can be an “optimizer” in the sense that it optimizes its environment. A human is an optimizer in this sense, because we robustly take actions that push our environment in a certain direction. A reinforcement learning agent can also be thought of as an optimizer in this sense, but confined to whatever environment it is run in. This is the sense in which “optimizer” is used in posts such as this. Let “optimizer_2” denote this concept.

These two concepts are distinct. Say that you somehow hook up a linear program solver to a reinforcement learning environment. Unless you do the “hooking up” in a particularly creative way, there is no reason to assume that the output of the linear program solver would push the environment in a particular direction. Hence a linear program solver is an optimizer_1, but not an optimizer_2. On the other hand, a simple tabular RL agent would eventually come to systematically push the environment in a particular direction, and is hence an optimizer_2. However, such a system does not run any internal optimization algorithm, and is therefore not an optimizer_1. This means that a system can be an optimizer_1 while not being an optimizer_2, and vice versa. A sketch of the tabular case follows below.
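To make the tabular RL case concrete, here is a rough sketch (my own hypothetical example, using a toy environment I made up for illustration). Each decision and update is a constant-time table lookup; no internal search or optimization algorithm ever runs. Yet over time the agent comes to systematically push the environment toward high-reward states:

```python
import random

# A toy environment: the state is an integer; the agent can increment
# or decrement it, and is rewarded when the state sits at a target.
TARGET = 5

def step(state, action):
    state = state + (1 if action == 1 else -1)
    reward = 1.0 if state == TARGET else 0.0
    return state, reward

# Tabular Q-learning: the "policy" is a lookup table updated by a
# simple incremental rule -- no planning or internal optimization.
Q = {}
def q(state, action):
    return Q.get((state, action), 0.0)

state = 0
for _ in range(10_000):
    # epsilon-greedy choice between the two actions
    if random.random() < 0.1:
        action = random.choice([0, 1])
    else:
        action = max([0, 1], key=lambda a: q(state, a))
    next_state, reward = step(state, action)
    # one-step Q-learning update (learning rate 0.5, discount 0.9)
    best_next = max(q(next_state, 0), q(next_state, 1))
    Q[(state, action)] = q(state, action) + 0.5 * (
        reward + 0.9 * best_next - q(state, action))
    state = next_state

# After training, the agent reliably steers the state toward TARGET:
# it optimizes its environment (optimizer_2) without running any
# internal optimization algorithm (optimizer_1).
```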

There are some arguments related to AI safety that seem to conflate these two concepts. In Superintelligence (p. 153), on the topic of Tool AI, Nick Bostrom writes:

A second place where trouble could arise is in the course of the software’s operation. If the methods that the software uses to search for a solution are sufficiently sophisticated, they may include provisions for managing the search process itself in an intelligent manner. In this case, the machine running the software may begin to seem less like a mere tool and more like an agent. Thus, the software may start by developing a plan for how to go about its search for a solution. The plan may specify which areas to explore first and with what methods, what data to gather, and how to make best use of available computational resources. In searching for a plan that satisfies the software’s internal criterion (such as yielding a sufficiently high probability of finding a solution satisfying the user-specified criterion within the allotted time), the software may stumble on an unorthodox idea. For instance, it might generate a plan that begins with the acquisition of additional computational resources and the elimination of potential interrupters (such as human beings).

To me, this argument seems to make an unexplained jump from optimizer_1 to optimizer_2. It begins with the observation that a powerful Tool AI would be likely to optimize its internal computation in various ways, and that this optimization process could be quite powerful. In other words, a powerful Tool AI would be a strong optimizer_1. It then concludes that the system might start pursuing convergent instrumental goals – in other words, that it would be an optimizer_2. The jump between the two is not explained.

The implicit assumption seems to be that an optimizer_1 could turn into an optimizer_2 unexpectedly if it becomes sufficiently powerful. It is not at all clear to me that this is the case – I have not seen any good argument to support this, nor can I think of any myself. The fact that a system is internally running an optimization algorithm does not imply that the system is selecting its output in such a way that this output optimizes the environment of the system.

The excerpt from Superintelligence is just one example of an argument that seems to slide between optimizer_1 and optimizer_2. Some parts of Dreams of Friendliness, for instance, seem to do the same, or at least it is not always clear which of the two senses is meant. I’m sure there are more examples as well.

Be mindful of this distinction when reasoning about AI. I propose that “consequentialist” (or perhaps “goal-directed”) be used to mean what I have called “optimizer_2”. I don’t think there is a need for a special word to denote what I have called “optimizer_1” (at least not once the distinction between optimizer_1 and optimizer_2 has been pointed out).


Note: It is possible to raise a sort of embedded-agency-style objection against the distinction between optimizer_1 and optimizer_2. One might argue as follows:

There is no sharp boundary between the inside and the outside of a computer. An “optimizer_1” is just an optimizer whose optimization target is defined in terms of the state of the computer it is installed on, whereas an “optimizer_2” is an optimizer whose optimization target is defined in terms of something outside the computer. Hence there is no categorical difference between an optimizer_1 and an optimizer_2.

I don’t think that this argument works. Consider the following two systems:

  • A computer that is able to very quickly solve very large linear programs.

  • A computer that solves linear programs, and also tries to prevent people from turning it off while it is doing so, and so on.

System 1 is an optimizer_1 that solves linear programs, whereas system 2 is an optimizer_2 that is optimizing the state of the computer it is installed on. These two things are different. (Moreover, the difference isn’t just that system 2 is “more powerful” than system 1 – system 1 might even be a better linear program solver than system 2.)


Acknowledgements: We were aware of the difference between “optimizer_1” and “optimizer_2” while working on the mesa-optimization paper, and I’m not sure who first pointed it out. We were also probably not the first people to realise this.