
Optimization


An optimization process is any kind of process that systematically comes up with solutions better than the ones used before. More technically, such a process steers the world into a specific and otherwise improbable set of states by searching through a large space of possibilities and hitting small, low-probability targets. When a process reliably steers the world toward certain states by searching for them, we can say it prefers those states.
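
To make the definition concrete, here is a minimal sketch (a hypothetical toy, not from the article; the bit-string space, score function, and step counts are all assumptions chosen for illustration) comparing blind random sampling with a simple hill climber at hitting a small, low-probability target:

```python
import random

# Toy search space: bit strings of length 40. The "target" is the single
# all-ones string, which has probability 2^-40 under random sampling.
N = 40

def score(state):
    return sum(state)  # number of 1-bits; the target maximizes this

def random_search(tries):
    # Blind sampling: draw random states and remember the best score seen.
    best = 0
    for _ in range(tries):
        best = max(best, score([random.randint(0, 1) for _ in range(N)]))
    return best

def hill_climb(steps):
    # Simple optimization: flip one random bit, keep the flip only if it helps.
    state = [random.randint(0, 1) for _ in range(N)]
    for _ in range(steps):
        i = random.randrange(N)
        candidate = state[:i] + [state[i] ^ 1] + state[i + 1:]
        if score(candidate) > score(state):
            state = candidate
    return score(state)

random.seed(0)
print("random sampling, 1000 tries:", random_search(1000))  # typically ~29/40
print("hill climbing, 1000 steps: ", hill_climb(1000))      # typically 40/40
```

Both processes examine the same space with the same budget; the hill climber simply reuses what it has already found, which is what lets it land on a state that random sampling would essentially never hit.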

The clearest way to illustrate an optimization process is with a simple example: Eliezer Yudkowsky points to natural selection as one. Driven by an implicit preference for better replicators, natural selection searches the vast space of possible genomes and hits tiny targets: advantageous mutations.

Consider the human being: a highly complex object, vanishingly unlikely to have arisen by chance. Natural selection, however, working over millions of years, built up the infrastructure needed to produce such a functioning body. This body, like those of other organisms, was selected to develop because it is itself a rather efficient replicator, well suited to the environment in which it arose.
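
The generate-and-test dynamic behind these examples fits in a few lines. In this hedged sketch (a hypothetical toy: fitness is just the number of 1-bits in a genome, standing in for replicative success), selection plus mutation steers a population into a region of genome space that chance alone would essentially never reach:

```python
import random

# Illustrative assumptions: binary genomes, truncation selection, and a
# fitness function that simply counts 1-bits.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 40, 100, 60, 0.01

def fitness(genome):
    return sum(genome)

def mutate(genome):
    # Each bit flips independently with a small probability.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Selection: better replicators leave descendants, the rest do not.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

print("best fitness:", max(map(fitness, population)))  # climbs toward 40
```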

Or consider the famous chess-playing computer, Deep Blue. Outside the narrow domain of selecting moves for chess games it can't do anything impressive, but as a chess player it was massively more effective than virtually all humans. It has high optimization power in the chess domain but almost none in any other. Humans and evolution, on the other hand, are more domain-general optimization processes than Deep Blue, but that doesn't mean they're more effective at chess specifically. (Note also where the optimization-process abstraction is useful and where it fails to be: it's not obvious what it would mean for "evolution" to play chess, and yet it is useful to talk about the optimization power of natural selection, or of Deep Blue.)

Measuring Optimization Power

One way to think mathematically about optimization, as with evidence, is in information-theoretic bits. The optimization power of a process is the amount of surprise we would feel at the result if no optimization were present: the base-two logarithm of the reciprocal of the result's probability. A one-in-a-million solution (a solution so good relative to your preference ordering that it would take a million random tries to find something at least as good) can thus be said to embody log_2(1,000,000) ≈ 19.9 bits of optimization. Compared to a random configuration of matter, any artifact you see is going to be far more optimized than this. The math describes only laws and general principles for reasoning about optimization; as with probability theory, you often can't apply it directly.
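
As a sketch of how one might estimate this measure in practice (the Monte Carlo estimator below is an assumption of this illustration, not a procedure from the article):

```python
import math
import random

def bits_of_optimization(outcome_score, random_scores):
    """Estimate log2(1/p), where p is the chance that a random try
    scores at least as well as the observed outcome."""
    hits = sum(s >= outcome_score for s in random_scores)
    p = hits / len(random_scores)  # assumes at least one hit in the sample
    return math.log2(1 / p)

# A literal one-in-a-million solution carries about 19.9 bits:
print(math.log2(1_000_000))  # 19.93...

# Empirical version: compare an "optimized" outcome against random tries.
random.seed(0)
random_tries = [random.gauss(0, 1) for _ in range(1_000_000)]
print(bits_of_optimization(3.0, random_tries))  # roughly 9.5 bits
```

Note the estimator's limits: an outcome so strongly optimized that no random try matches it yields p = 0, which is one concrete face of the point above that the math can't always be applied directly.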

Further Reading & References

See also

The ground of optimization (Alex Flint, 20 Jun 2020)
Optimization (Eliezer Yudkowsky, 13 Sep 2008)
Optimization Amplifies (Scott Garrabrant, 27 Jun 2018)
Measuring Optimization Power (Eliezer Yudkowsky, 27 Oct 2008)
Selection vs Control (abramdemski, 2 Jun 2019)
Risks from Learned Optimization: Introduction (31 May 2019)
Goodhart's Curse and Limitations on AI Alignment (G Gordon Worley III, 19 Aug 2019)
DL towards the unaligned Recursive Self-Optimization attractor (jacob_cannell, 18 Dec 2021)
Thoughts and problems with Eliezer's measure of optimization power (Stuart_Armstrong, 8 Jun 2012)
What is optimization power, formally? (sbenthall, 18 Oct 2014)
Mathematical Measures of Optimization Power (Alex_Altair, 24 Nov 2012)
Optimization Provenance (Adele Lopez, 23 Aug 2019)
Two senses of "optimizer" (Joar Skalse, 21 Aug 2019)
The Optimizer's Curse and How to Beat It (lukeprog, 16 Sep 2011)
Is the term mesa optimizer too narrow? (Matthew Barnett, 14 Dec 2019)
Mesa-Optimizers vs "Steered Optimizers" (Steven Byrnes, 10 Jul 2020)
Mesa-Optimizers and Over-optimization Failure (Optimizing and Goodhart Effects, Clarifying Thoughts—Part 4) (Davidmanheim, 12 Aug 2019)
The Credit Assignment Problem (abramdemski, 8 Nov 2019)
Bottle Caps Aren't Optimisers (DanielFilan, 31 Aug 2018)
Fake Optimization Criteria (Eliezer Yudkowsky, 10 Nov 2007)
Search versus design (Alex Flint, 16 Aug 2020)
The First World Takeover (Eliezer Yudkowsky, 19 Nov 2008)
Life's Story Continues (Eliezer Yudkowsky, 21 Nov 2008)
Utility Maximization = Description Length Minimization (johnswentworth, 18 Feb 2021)
Applications for Deconfusing Goal-Directedness (adamShimi, 8 Aug 2021)
A new definition of "optimizer" (Chantiel, 9 Aug 2021)
Measurement, Optimization, and Take-off Speed (jsteinhardt, 10 Sep 2021)
In Defence of Optimizing Routine Tasks (leogao, 9 Nov 2021)
Ngo and Yudkowsky on AI capability gains (18 Nov 2021)
Defining "optimizer" (Chantiel, 17 Apr 2021)
Bits of Optimization Can Only Be Lost Over A Distance (johnswentworth, 23 May 2022)
Distributed Decisions (johnswentworth, 29 May 2022)
Optimization power as divergence from default trajectories (Josh, 15 Jun 2022)
Quantifying General Intelligence (JasonBrown, 17 Jun 2022)
Degrees of Freedom (sarahconstantin, 2 Apr 2019)
Hedonic asymmetries (paulfchristiano, 26 Jan 2020)
Demons in Imperfect Search (johnswentworth, 11 Feb 2020)
Tessellating Hills: a toy model for demons in imperfect search (DaemonicSigil, 20 Feb 2020)
Aligning a toy model of optimization (paulfchristiano, 28 Jun 2019)
Siren worlds and the perils of over-optimised search (Stuart_Armstrong, 7 Apr 2014)
Worse Than Random (Eliezer Yudkowsky, 11 Nov 2008)
Efficient Cross-Domain Optimization (Eliezer Yudkowsky, 28 Oct 2008)
Optimization and the Singularity (Eliezer Yudkowsky, 23 Jun 2008)
Observing Optimization (Eliezer Yudkowsky, 21 Nov 2008)
Satisficers want to become maximisers (Stuart_Armstrong, 21 Oct 2011)
Evolutions Building Evolutions: Layers of Generate and Test (plex, 5 Feb 2021)
Surprising examples of non-human optimization (Jan_Rzymkowski, 14 Jun 2015)
Accidental Optimizers (aysajan, 22 Sep 2021)
Optimization Concepts in the Game of Life (16 Oct 2021)
Understanding Gradient Hacking (peterbarnett, 10 Dec 2021)
Transforming myopic optimization to ordinary optimization—Do we want to seek convergence for myopic optimization problems? (tailcalled, 11 Dec 2021)
Hypothesis: gradient descent prefers general circuits (Quintin Pope, 8 Feb 2022)
Optimizing crop planting with mixed integer linear programming in Stardew Valley (hapanin, 5 Apr 2022)
Adversarial attacks and optimal control (Jan, 22 May 2022)
Non-resolve as Resolve (Linda Linsefors, 10 Jul 2018)
Optimization and Adequacy in Five Bullets (james.lucassen, 6 Jun 2022)
Breaking Down Goal-Directed Behaviour (Oliver Sourbut, 16 Jun 2022)
Perils of optimizing in social contexts (owencb, 16 Jun 2022)
The Limits of Automation (milkandcigarettes, 23 Jun 2022)