
Gears-Level

Last edit: 4 Dec 2022 13:55 UTC by Bob Jacobs

A gears-level model is ‘well-constrained’ in the sense that there is a strong connection between each of the things you observe—it would be hard for you to imagine that one of the variables could be different while all of the others remained the same.

Related Tags: Anticipated Experiences, Double-Crux, Empiricism, Falsifiability, Map and Territory


The term gears-level was first introduced on LW in the post “Gears in Understanding”:

This property is how deterministically interconnected the variables of the model are. There are a few tests I know of to see to what extent a model has this property, though I don’t know if this list is exhaustive and would be a little surprised if it were:
1. Does the model pay rent? If it does, and if it were falsified, how much (and how precisely) could you infer other things from the falsification?
2. How incoherent is it to imagine that the model is accurate but that a given variable could be different?
3. If you knew the model were accurate but you were to forget the value of one variable, could you rederive it?

An example from Gears in Understanding of a gears-level model is (surprise) a box of gears. If you can see a series of interlocked gears, alternately turning clockwise, then counterclockwise, and so on, then you are able to anticipate the direction of any given gear, even if you cannot see it. It would be very difficult to imagine all of the gears turning as they are but only one of them changing direction whilst remaining interlocked. And finally, you would be able to rederive the direction of any given gear if you forgot it.
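The gear-box example can be made concrete with a minimal sketch (illustrative only, not from the original post): adjacent interlocked gears must turn in opposite directions, so the direction of every gear is fully determined by the direction of the first one. All names here (`gear_direction`, the string labels) are hypothetical.

```python
def gear_direction(first_gear: str, index: int) -> str:
    """Direction of gear `index` in a chain of interlocked gears,
    given the direction of gear 0. Adjacent gears counter-rotate,
    so direction depends only on the parity of the index."""
    assert first_gear in ("clockwise", "counterclockwise")
    flipped = "counterclockwise" if first_gear == "clockwise" else "clockwise"
    return first_gear if index % 2 == 0 else flipped

# Rederiving a "forgotten" variable: if gear 0 turns clockwise,
# gear 5 must turn counterclockwise -- imagining otherwise while the
# gears stay interlocked is incoherent within the model.
print(gear_direction("clockwise", 5))  # counterclockwise
print(gear_direction("clockwise", 4))  # clockwise
```

This is what the three tests above point at: the model pays rent (it predicts every gear's motion), a single deviant variable is incoherent, and any forgotten value can be rederived from the rest.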


Note that the author of Gears in Understanding, Valentine, was careful to point out that these tests do not fully define the property ‘gears-level’, and that “Gears-ness is not the same as goodness”—there are other things that are valuable in a model, and many things cannot practically be modelled in this fashion. If you intend to use the term, it is highly recommended that you read the post beforehand, as the concept is not easily defined.

Gears in understanding

Valentine · 12 May 2017 0:36 UTC
159 points · 38 comments · 10 min read · LW link

Gears-Level Models are Capital Investments

johnswentworth · 22 Nov 2019 22:41 UTC
143 points · 30 comments · 7 min read · LW link · 1 review

Gears vs Behavior

johnswentworth · 19 Sep 2019 6:50 UTC
84 points · 12 comments · 7 min read · LW link · 1 review

The map has gears. They don’t always turn.

abramdemski · 22 Feb 2018 20:16 UTC
21 points · 0 comments · 1 min read · LW link

Gears Level & Policy Level

abramdemski · 24 Nov 2017 7:17 UTC
58 points · 8 comments · 7 min read · LW link

Toward a New Technical Explanation of Technical Explanation

abramdemski · 16 Feb 2018 0:44 UTC
83 points · 36 comments · 18 min read · LW link · 1 review

The Lens That Sees Its Flaws

Eliezer Yudkowsky · 23 Sep 2007 0:10 UTC
194 points · 41 comments · 3 min read · LW link

When Gears Go Wrong

Matt Goldenberg · 2 Aug 2020 6:21 UTC
28 points · 6 comments · 6 min read · LW link

Paper-Reading for Gears

johnswentworth · 4 Dec 2019 21:02 UTC
149 points · 6 comments · 4 min read · LW link · 1 review

Technology Changes Constraints

johnswentworth · 25 Jan 2020 23:13 UTC
106 points · 6 comments · 4 min read · LW link

In praise of fake frameworks

Valentine · 11 Jul 2017 2:12 UTC
85 points · 14 comments · 7 min read · LW link

Science in a High-Dimensional World

johnswentworth · 8 Jan 2021 17:52 UTC
248 points · 51 comments · 7 min read · LW link

Constraints & Slackness as a Worldview Generator

johnswentworth · 25 Jan 2020 23:18 UTC
51 points · 4 comments · 4 min read · LW link

Material Goods as an Abundant Resource

johnswentworth · 25 Jan 2020 23:23 UTC
75 points · 9 comments · 5 min read · LW link

Wrinkles

johnswentworth · 19 Nov 2019 22:59 UTC
77 points · 15 comments · 4 min read · LW link

Theory and Data as Constraints

johnswentworth · 21 Feb 2020 22:00 UTC
59 points · 6 comments · 4 min read · LW link

Homeostasis and “Root Causes” in Aging

johnswentworth · 5 Jan 2020 18:43 UTC
79 points · 25 comments · 3 min read · LW link

The Lens, Progerias and Polycausality

johnswentworth · 8 Mar 2020 17:53 UTC
70 points · 8 comments · 3 min read · LW link

Adaptive Immune System Aging

johnswentworth · 13 Mar 2020 3:47 UTC
73 points · 9 comments · 3 min read · LW link

Abstraction, Evolution and Gears

johnswentworth · 24 Jun 2020 17:39 UTC
29 points · 11 comments · 4 min read · LW link

Everyday Lessons from High-Dimensional Optimization

johnswentworth · 6 Jun 2020 20:57 UTC
152 points · 37 comments · 6 min read · LW link

Evolution of Modularity

johnswentworth · 14 Nov 2019 6:49 UTC
159 points · 12 comments · 2 min read · LW link · 1 review

Book Review: Design Principles of Biological Circuits

johnswentworth · 5 Nov 2019 6:49 UTC
198 points · 24 comments · 12 min read · LW link · 1 review

Explanation vs Rationalization

abramdemski · 22 Feb 2018 23:46 UTC
16 points · 11 comments · 4 min read · LW link

Timeless Modesty?

abramdemski · 24 Nov 2017 11:12 UTC
17 points · 2 comments · 3 min read · LW link

[Question] Theory of Causal Models with Dynamic Structure?

johnswentworth · 23 Jan 2020 19:47 UTC
24 points · 7 comments · 1 min read · LW link

[Question] What is the right phrase for “theoretical evidence”?

Adam Zerner · 1 Nov 2020 20:43 UTC
24 points · 41 comments · 2 min read · LW link

Debugging the student

Adam Zerner · 16 Dec 2020 7:07 UTC
43 points · 7 comments · 4 min read · LW link

Believing vs understanding

Adam Zerner · 24 Jul 2021 3:39 UTC
15 points · 2 comments · 6 min read · LW link

Local Validity as a Key to Sanity and Civilization

Eliezer Yudkowsky · 7 Apr 2018 4:25 UTC
155 points · 65 comments · 13 min read · LW link · 5 reviews

Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

Zvi · 15 Nov 2021 3:50 UTC
204 points · 48 comments · 16 min read · LW link
(thezvi.wordpress.com)

Beware of black boxes in AI alignment research

cousin_it · 18 Jan 2018 15:07 UTC
39 points · 10 comments · 1 min read · LW link

What is Life in an Immoral Maze?

Zvi · 5 Jan 2020 13:40 UTC
67 points · 56 comments · 5 min read · LW link
(thezvi.wordpress.com)

The Futility of Emergence

Eliezer Yudkowsky · 26 Aug 2007 22:10 UTC
83 points · 141 comments · 3 min read · LW link

Generalizing Experimental Results by Leveraging Knowledge of Mechanisms

Carlos_Cinelli · 11 Dec 2019 20:39 UTC
50 points · 5 comments · 1 min read · LW link

Artificial Addition

Eliezer Yudkowsky · 20 Nov 2007 7:58 UTC
68 points · 129 comments · 6 min read · LW link

Dreams of AI Design

Eliezer Yudkowsky · 27 Aug 2008 4:04 UTC
26 points · 61 comments · 5 min read · LW link

Reality has a surprising amount of detail

jsalvatier · 13 May 2017 20:02 UTC
65 points · 24 comments · 1 min read · LW link
(johnsalvatier.org)

Admiring the Guts of Things.

Melkor · 11 Jun 2018 23:12 UTC
21 points · 1 comment · 3 min read · LW link

interpreting GPT: the logit lens

nostalgebraist · 31 Aug 2020 2:47 UTC
157 points · 32 comments · 11 min read · LW link

Rethinking Batch Normalization

Matthew Barnett · 2 Aug 2019 20:21 UTC
20 points · 5 comments · 8 min read · LW link

Anatomy of a Gear

johnswentworth · 16 Nov 2020 16:34 UTC
76 points · 12 comments · 7 min read · LW link

[Question] By which mechanism does immunity favor new Covid variants?

anorangicc · 4 Apr 2021 10:24 UTC
2 points · 5 comments · 1 min read · LW link

Why Artists Study Anatomy

Sisi Cheng · 18 May 2020 18:44 UTC
94 points · 10 comments · 2 min read · LW link · 1 review

Towards Gears-Level Understanding of Agency

Thane Ruthenis · 16 Jun 2022 22:00 UTC
24 points · 4 comments · 18 min read · LW link

A Sketch of Good Communication

Ben Pace · 31 Mar 2018 22:48 UTC
155 points · 35 comments · 3 min read · LW link · 1 review

Value Formation: An Overarching Model

Thane Ruthenis · 15 Nov 2022 17:16 UTC
20 points · 0 comments · 34 min read · LW link

Current themes in mechanistic interpretability research

16 Nov 2022 14:14 UTC
82 points · 3 comments · 12 min read · LW link