
Gears-Level

Last edit: 20 Aug 2020 19:10 UTC by brook

A gears-level model is ‘well-constrained’ in the sense that there is a strong connection between each of the things you observe—it would be hard for you to imagine that one of the variables could be different while all of the others remained the same.

Related Tags: Anticipated Experiences, Double Crux, Empiricism, Falsifiability, Map and Territory


The term “gears-level” was first introduced on LessWrong in the post “Gears in Understanding”:

This property is how deterministically interconnected the variables of the model are. There are a few tests I know of to see to what extent a model has this property, though I don’t know if this list is exhaustive and would be a little surprised if it were:
1. Does the model pay rent? If it does, and if it were falsified, how much (and how precisely) could you infer other things from the falsification?
2. How incoherent is it to imagine that the model is accurate but that a given variable could be different?
3. If you knew the model were accurate but you were to forget the value of one variable, could you rederive it?

An example from Gears in Understanding of a gears-level model is (surprise) a box of gears. If you can see a series of interlocked gears, alternately turning clockwise, then counterclockwise, and so on, then you can anticipate the direction of any given gear, even if you cannot see it. It would be very difficult to imagine all of the gears turning as they do but a single one of them reversing direction while remaining interlocked. And finally, you would be able to rederive the direction of any given gear if you forgot it.
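As a minimal illustration (a sketch of our own, not code from the post), the gear-chain example can be written as a tiny deterministic model: adjacent interlocked gears must turn in opposite directions, so the direction of every gear is fixed by the direction of any one of them. The function name and the "CW"/"CCW" labels are hypothetical conveniences.

```python
# Illustrative sketch of the interlocked-gear-chain model: gears an even
# distance apart turn the same way; an odd distance apart, the opposite way.
# This is what makes the model "well-constrained": forget any one gear's
# direction and you can rederive it from any other.

def gear_direction(known_index: int, known_direction: str, query_index: int) -> str:
    """Rederive the direction ("CW" or "CCW") of gear `query_index`,
    given the direction of gear `known_index` in the same chain."""
    flip = {"CW": "CCW", "CCW": "CW"}
    if (query_index - known_index) % 2 == 0:
        return known_direction  # even distance: same direction
    return flip[known_direction]  # odd distance: opposite direction

# If gear 0 turns clockwise, gear 3 must turn counterclockwise:
print(gear_direction(0, "CW", 3))   # CCW
# "Forgetting" gear 5 and rederiving it from gear 2:
print(gear_direction(2, "CCW", 5))  # CW
```

Note that there is no consistent assignment in which exactly one gear reverses while the rest stay fixed, which is precisely the "incoherent to imagine one variable different" test from the quote above.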


Note that the author of Gears in Understanding, Valentine, was careful to point out that these tests do not fully define the property ‘gears-level’, and that “Gears-ness is not the same as goodness”—there are other things that are valuable in a model, and many things cannot practically be modelled in this fashion. If you intend to use the term, it is highly recommended that you read the post beforehand, as the concept is not easily defined.

Gears in Understanding
Valentine, 12 May 2017, 140 points, 37 comments, 10 min read

Gears-Level Models are Capital Investments
johnswentworth, 22 Nov 2019, 122 points, 30 comments, 7 min read

Gears vs Behavior
johnswentworth, 19 Sep 2019, 65 points, 11 comments, 7 min read

The map has gears. They don’t always turn.
abramdemski, 22 Feb 2018, 21 points, 0 comments, 1 min read

Toward a New Technical Explanation of Technical Explanation
abramdemski, 16 Feb 2018, 82 points, 36 comments, 18 min read

The Lens That Sees Its Flaws
Eliezer Yudkowsky, 23 Sep 2007, 116 points, 40 comments, 3 min read

When Gears Go Wrong
Matt Goldenberg, 2 Aug 2020, 28 points, 6 comments, 6 min read

Paper-Reading for Gears
johnswentworth, 4 Dec 2019, 130 points, 6 comments, 4 min read

In praise of fake frameworks
Valentine, 11 Jul 2017, 79 points, 14 comments, 7 min read

Science in a High-Dimensional World
johnswentworth, 8 Jan 2021, 131 points, 33 comments, 7 min read

Technology Changes Constraints
johnswentworth, 25 Jan 2020, 88 points, 6 comments, 4 min read

Constraints & Slackness as a Worldview Generator
johnswentworth, 25 Jan 2020, 40 points, 2 comments, 4 min read

Material Goods as an Abundant Resource
johnswentworth, 25 Jan 2020, 64 points, 8 comments, 5 min read

Wrinkles
johnswentworth, 19 Nov 2019, 64 points, 15 comments, 4 min read

Theory and Data as Constraints
johnswentworth, 21 Feb 2020, 48 points, 6 comments, 4 min read

Homeostasis and “Root Causes” in Aging
johnswentworth, 5 Jan 2020, 75 points, 25 comments, 3 min read

The Lens, Progerias and Polycausality
johnswentworth, 8 Mar 2020, 63 points, 8 comments, 3 min read

Adaptive Immune System Aging
johnswentworth, 13 Mar 2020, 69 points, 9 comments, 3 min read

Abstraction, Evolution and Gears
johnswentworth, 24 Jun 2020, 25 points, 11 comments, 4 min read

Everyday Lessons from High-Dimensional Optimization
johnswentworth, 6 Jun 2020, 135 points, 36 comments, 6 min read

Evolution of Modularity
johnswentworth, 14 Nov 2019, 133 points, 9 comments, 2 min read

Book Review: Design Principles of Biological Circuits
johnswentworth, 5 Nov 2019, 159 points, 22 comments, 12 min read

Explanation vs Rationalization
abramdemski, 22 Feb 2018, 15 points, 11 comments, 4 min read

Timeless Modesty?
abramdemski, 24 Nov 2017, 16 points, 2 comments, 3 min read

[Question] Theory of Causal Models with Dynamic Structure?
johnswentworth, 23 Jan 2020, 24 points, 7 comments, 1 min read

[Question] What is the right phrase for “theoretical evidence”?
adamzerner, 1 Nov 2020, 24 points, 41 comments, 2 min read

Debugging the student
adamzerner, 16 Dec 2020, 41 points, 7 comments, 4 min read

Believing vs understanding
adamzerner, 24 Jul 2021, 15 points, 2 comments, 6 min read

Local Validity as a Key to Sanity and Civilization
Eliezer Yudkowsky, 7 Apr 2018, 133 points, 65 comments, 13 min read

Gears Level & Policy Level
abramdemski, 24 Nov 2017, 47 points, 8 comments, 7 min read

Beware of black boxes in AI alignment research
cousin_it, 18 Jan 2018, 39 points, 10 comments, 1 min read

What is Life in an Immoral Maze?
Zvi, 5 Jan 2020, 66 points, 56 comments, 5 min read (thezvi.wordpress.com)

The Futility of Emergence
Eliezer Yudkowsky, 26 Aug 2007, 69 points, 140 comments, 3 min read

Generalizing Experimental Results by Leveraging Knowledge of Mechanisms
Carlos_Cinelli, 11 Dec 2019, 49 points, 5 comments, 1 min read

Artificial Addition
Eliezer Yudkowsky, 20 Nov 2007, 59 points, 127 comments, 6 min read

Dreams of AI Design
Eliezer Yudkowsky, 27 Aug 2008, 20 points, 61 comments, 5 min read

Reality has a surprising amount of detail
jsalvatier, 13 May 2017, 62 points, 21 comments, 1 min read (johnsalvatier.org)

Admiring the Guts of Things.
Melkor, 11 Jun 2018, 21 points, 1 comment, 3 min read

interpreting GPT: the logit lens
nostalgebraist, 31 Aug 2020, 113 points, 32 comments, 11 min read

Rethinking Batch Normalization
Matthew Barnett, 2 Aug 2019, 20 points, 5 comments, 8 min read

Anatomy of a Gear
johnswentworth, 16 Nov 2020, 74 points, 12 comments, 7 min read

[Question] By which mechanism does immunity favor new Covid variants?
anorangic, 4 Apr 2021, 2 points, 5 comments, 1 min read

Why Artists Study Anatomy
Sisi Cheng, 18 May 2020, 86 points, 9 comments, 2 min read