Gears-Level

Last edit: 4 Dec 2022 13:55 UTC by B Jacobs

A gears-level model is ‘well-constrained’ in the sense that there is a strong connection between each of the things you observe—it would be hard for you to imagine that one of the variables could be different while all of the others remained the same.

Related Tags: Anticipated Experiences, Double-Crux, Empiricism, Falsifiability, Map and Territory


The term gears-level was first described on LW in the post “Gears in Understanding”:

This property is how deterministically interconnected the variables of the model are. There are a few tests I know of to see to what extent a model has this property, though I don’t know if this list is exhaustive and would be a little surprised if it were:
1. Does the model pay rent? If it does, and if it were falsified, how much (and how precisely) could you infer other things from the falsification?
2. How incoherent is it to imagine that the model is accurate but that a given variable could be different?
3. If you knew the model were accurate but you were to forget the value of one variable, could you rederive it?

An example from Gears in Understanding of a gears-level model is (surprise) a box of gears. If you can see a series of interlocked gears, alternately turning clockwise, then counterclockwise, and so on, then you can anticipate the direction of any given gear, even one you cannot see. It would be very difficult to imagine all of the gears turning as they are but only one of them changing direction whilst remaining interlocked. And finally, you would be able to rederive the direction of any given gear if you forgot it.
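The rederivation property of the gear-box example can be sketched in a few lines of code: because adjacent interlocked gears must counter-rotate, the direction of every gear in the chain is fully determined by any one of them. This is a minimal illustrative sketch, not from the original post; the function name and direction labels are invented for the example.

```python
def gear_direction(known_index, known_direction, target_index):
    """Return 'CW' or 'CCW' for the target gear, given one known gear.

    Adjacent interlocked gears counter-rotate, so direction flips at
    each step along the chain: gears an even distance apart turn the
    same way, gears an odd distance apart turn opposite ways.
    """
    assert known_direction in ("CW", "CCW")
    flips = (target_index - known_index) % 2  # parity of the distance
    if flips == 0:
        return known_direction
    return "CCW" if known_direction == "CW" else "CW"

# If gear 0 turns clockwise, gear 5 (an odd distance away) must turn
# counterclockwise; it is incoherent for it to do otherwise while the
# gears remain interlocked.
```

This is exactly the "well-constrained" property: forgetting one variable (a gear's direction) costs nothing, because the rest of the model pins it down.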


Note that the author of Gears in Understanding, Valentine, was careful to point out that these tests do not fully define the property ‘gears-level’, and that “Gears-ness is not the same as goodness”: there are other things that are valuable in a model, and many things cannot practically be modelled in this fashion. If you intend to use the term, it is highly recommended that you read the post beforehand, as the concept is not easily defined.

Gears in understanding
Valentine · 12 May 2017 0:36 UTC · 193 points · 38 comments · 10 min read · LW link

Gears-Level Models are Capital Investments
johnswentworth · 22 Nov 2019 22:41 UTC · 175 points · 28 comments · 7 min read · LW link · 1 review

Gears vs Behavior
johnswentworth · 19 Sep 2019 6:50 UTC · 114 points · 14 comments · 7 min read · LW link · 1 review

The map has gears. They don’t always turn.
abramdemski · 22 Feb 2018 20:16 UTC · 24 points · 0 comments · 1 min read · LW link

Gears Level & Policy Level
abramdemski · 24 Nov 2017 7:17 UTC · 61 points · 8 comments · 7 min read · LW link

Toward a New Technical Explanation of Technical Explanation
abramdemski · 16 Feb 2018 0:44 UTC · 86 points · 36 comments · 18 min read · LW link · 1 review

When Gears Go Wrong
Matt Goldenberg · 2 Aug 2020 6:21 UTC · 28 points · 6 comments · 6 min read · LW link

The Lens That Sees Its Flaws
Eliezer Yudkowsky · 23 Sep 2007 0:10 UTC · 339 points · 45 comments · 3 min read · LW link

Paper-Reading for Gears
johnswentworth · 4 Dec 2019 21:02 UTC · 163 points · 6 comments · 4 min read · LW link · 1 review

In praise of fake frameworks
Valentine · 11 Jul 2017 2:12 UTC · 115 points · 15 comments · 7 min read · LW link

Technology Changes Constraints
johnswentworth · 25 Jan 2020 23:13 UTC · 116 points · 6 comments · 4 min read · LW link

Science in a High-Dimensional World
johnswentworth · 8 Jan 2021 17:52 UTC · 290 points · 53 comments · 7 min read · LW link · 1 review

Evolution of Modularity
johnswentworth · 14 Nov 2019 6:49 UTC · 185 points · 12 comments · 2 min read · LW link · 1 review

Book Review: Design Principles of Biological Circuits
johnswentworth · 5 Nov 2019 6:49 UTC · 218 points · 24 comments · 12 min read · LW link · 1 review

Local Validity as a Key to Sanity and Civilization
Eliezer Yudkowsky · 7 Apr 2018 4:25 UTC · 211 points · 68 comments · 13 min read · LW link · 5 reviews

A Crisper Explanation of Simulacrum Levels
Thane Ruthenis · 23 Dec 2023 22:13 UTC · 86 points · 13 comments · 13 min read · LW link

[Question] What are the best resources for building gears-level models of how governments actually work?
adamShimi · 19 Aug 2024 14:05 UTC · 19 points · 6 comments · 1 min read · LW link

Constraints & Slackness as a Worldview Generator
johnswentworth · 25 Jan 2020 23:18 UTC · 55 points · 4 comments · 4 min read · LW link

Material Goods as an Abundant Resource
johnswentworth · 25 Jan 2020 23:23 UTC · 81 points · 10 comments · 5 min read · LW link

Wrinkles
johnswentworth · 19 Nov 2019 22:59 UTC · 81 points · 14 comments · 4 min read · LW link

Theory and Data as Constraints
johnswentworth · 21 Feb 2020 22:00 UTC · 65 points · 7 comments · 4 min read · LW link

Homeostasis and “Root Causes” in Aging
johnswentworth · 5 Jan 2020 18:43 UTC · 86 points · 25 comments · 3 min read · LW link

The Lens, Progerias and Polycausality
johnswentworth · 8 Mar 2020 17:53 UTC · 71 points · 8 comments · 3 min read · LW link

Adaptive Immune System Aging
johnswentworth · 13 Mar 2020 3:47 UTC · 75 points · 9 comments · 3 min read · LW link

Abstraction, Evolution and Gears
johnswentworth · 24 Jun 2020 17:39 UTC · 29 points · 11 comments · 4 min read · LW link

Everyday Lessons from High-Dimensional Optimization
johnswentworth · 6 Jun 2020 20:57 UTC · 164 points · 44 comments · 6 min read · LW link

A Case for the Least Forgiving Take On Alignment
Thane Ruthenis · 2 May 2023 21:34 UTC · 100 points · 84 comments · 22 min read · LW link

Explanation vs Rationalization
abramdemski · 22 Feb 2018 23:46 UTC · 16 points · 11 comments · 4 min read · LW link

Timeless Modesty?
abramdemski · 24 Nov 2017 11:12 UTC · 17 points · 2 comments · 3 min read · LW link

[Question] Theory of Causal Models with Dynamic Structure?
johnswentworth · 23 Jan 2020 19:47 UTC · 24 points · 6 comments · 1 min read · LW link

[Question] What is the right phrase for “theoretical evidence”?
Adam Zerner · 1 Nov 2020 20:43 UTC · 23 points · 41 comments · 2 min read · LW link

Inside Views, Impostor Syndrome, and the Great LARP
johnswentworth · 25 Sep 2023 16:08 UTC · 331 points · 53 comments · 5 min read · LW link

Debugging the student
Adam Zerner · 16 Dec 2020 7:07 UTC · 46 points · 7 comments · 4 min read · LW link

A Good Explanation of Differential Gears
Johannes C. Mayer · 19 Oct 2023 23:07 UTC · 47 points · 4 comments · 1 min read · LW link (youtu.be)

Believing vs understanding
Adam Zerner · 24 Jul 2021 3:39 UTC · 15 points · 2 comments · 6 min read · LW link

Attempted Gears Analysis of AGI Intervention Discussion With Eliezer
Zvi · 15 Nov 2021 3:50 UTC · 197 points · 49 comments · 16 min read · LW link (thezvi.wordpress.com)

rough draft on what happens in the brain when you have an insight
Emrik · 21 May 2024 18:02 UTC · 11 points · 2 comments · 1 min read · LW link

The Gears of Argmax
StrivingForLegibility · 4 Jan 2024 23:30 UTC · 11 points · 0 comments · 3 min read · LW link

Reality has a surprising amount of detail
jsalvatier · 13 May 2017 20:02 UTC · 82 points · 30 comments · 1 min read · LW link (johnsalvatier.org)

Admiring the Guts of Things.
Melkor · 11 Jun 2018 23:12 UTC · 22 points · 1 comment · 3 min read · LW link

interpreting GPT: the logit lens
nostalgebraist · 31 Aug 2020 2:47 UTC · 223 points · 37 comments · 11 min read · LW link

Towards Gears-Level Understanding of Agency
Thane Ruthenis · 16 Jun 2022 22:00 UTC · 25 points · 4 comments · 18 min read · LW link

A Sketch of Good Communication
Ben Pace · 31 Mar 2018 22:48 UTC · 201 points · 35 comments · 3 min read · LW link · 1 review

Rethinking Batch Normalization
Matthew Barnett · 2 Aug 2019 20:21 UTC · 20 points · 5 comments · 8 min read · LW link

Value Formation: An Overarching Model
Thane Ruthenis · 15 Nov 2022 17:16 UTC · 34 points · 20 comments · 34 min read · LW link

Current themes in mechanistic interpretability research
16 Nov 2022 14:14 UTC · 89 points · 2 comments · 12 min read · LW link

Legibility Makes Logical Line-Of-Sight Transitive
StrivingForLegibility · 19 Jan 2024 23:39 UTC · 13 points · 0 comments · 5 min read · LW link

Anatomy of a Gear
johnswentworth · 16 Nov 2020 16:34 UTC · 79 points · 12 comments · 7 min read · LW link

[Question] By which mechanism does immunity favor new Covid variants?
anorangicc · 4 Apr 2021 10:24 UTC · 2 points · 5 comments · 1 min read · LW link

Beware of black boxes in AI alignment research
cousin_it · 18 Jan 2018 15:07 UTC · 39 points · 10 comments · 1 min read · LW link

What is Life in an Immoral Maze?
Zvi · 5 Jan 2020 13:40 UTC · 71 points · 56 comments · 5 min read · LW link (thezvi.wordpress.com)

Don’t want Goodhart? — Specify the variables more
YanLyutnev · 21 Nov 2024 22:43 UTC · 2 points · 2 comments · 5 min read · LW link

The Futility of Emergence
Eliezer Yudkowsky · 26 Aug 2007 22:10 UTC · 104 points · 142 comments · 3 min read · LW link

Decision Transformer Interpretability
6 Feb 2023 7:29 UTC · 84 points · 13 comments · 24 min read · LW link

Don’t want Goodhart? — Specify the damn variables
Ян Лютнев · 21 Nov 2024 22:45 UTC · −5 points · 0 comments · 5 min read · LW link

Generalizing Experimental Results by Leveraging Knowledge of Mechanisms
Carlos_Cinelli · 11 Dec 2019 20:39 UTC · 50 points · 5 comments · 1 min read · LW link

Artificial Addition
Eliezer Yudkowsky · 20 Nov 2007 7:58 UTC · 90 points · 128 comments · 6 min read · LW link

Dreams of AI Design
Eliezer Yudkowsky · 27 Aug 2008 4:04 UTC · 40 points · 61 comments · 5 min read · LW link

Why Artists Study Anatomy
Sisi Cheng · 18 May 2020 18:44 UTC · 98 points · 10 comments · 2 min read · LW link · 1 review