
Distillation & Pedagogy

Last edit: 21 Aug 2020 22:31 UTC by Raemon

Distillation is the process of taking a complex subject and making it easier to understand. Pedagogy is the method and practice of teaching. A good intellectual pipeline requires not just discovering new ideas, but also making them easier for newcomers to learn, so that they can stand on the shoulders of giants and discover even more ideas.

Chris Olah, founder of distill.pub, writes in his essay Research Debt:

Programmers talk about technical debt: there are ways to write software that are faster in the short run but problematic in the long run. Managers talk about institutional debt: institutions can grow quickly at the cost of bad practices creeping in. Both are easy to accumulate but hard to get rid of.

Research can also have debt. It comes in several forms:

  • Poor Exposition – Often, there is no good explanation of important ideas and one has to struggle to understand them. This problem is so pervasive that we take it for granted and don’t appreciate how much better things could be.

  • Undigested Ideas – Most ideas start off rough and hard to understand. They become radically easier as we polish them, developing the right analogies, language, and ways of thinking.

  • Bad abstractions and notation – Abstractions and notation are the user interface of research, shaping how we think and communicate. Unfortunately, we often get stuck with the first formalisms to develop even when they’re bad. For example, an object with extra electrons is negative, and pi is wrong.

  • Noise – Being a researcher is like standing in the middle of a construction site. Countless papers scream for your attention and there’s no easy way to filter or summarize them. Because most work is explained poorly, it takes a lot of energy to understand each piece of work. For many papers, one wants a simple one-sentence explanation of it, but needs to fight with it to get that sentence. Because the simplest way to get the attention of interested parties is to get everyone’s attention, we get flooded with work. Because we incentivize people being “prolific,” we get flooded with a lot of work… We think noise is the main way experts experience research debt.

The insidious thing about research debt is that it’s normal. Everyone takes it for granted, and doesn’t realize that things could be different. For example, it’s normal to give very mediocre explanations of research, and people perceive that to be the ceiling of explanation quality. On the rare occasions that truly excellent explanations come along, people see them as one-off miracles rather than a sign that we could systematically be doing better.

See also Scholarship and Learning, and Good Explanations.

How to teach things well

Neel Nanda · 28 Aug 2020 16:44 UTC
100 points
15 comments · 15 min read · LW link · 1 review
(www.neelnanda.io)

Research Debt

Elizabeth · 15 Jul 2018 19:36 UTC
24 points
2 comments · 1 min read · LW link
(distill.pub)

[Question] What are Examples of Great Distillers?

adamShimi · 12 Nov 2020 14:09 UTC
32 points
12 comments · 1 min read · LW link

Explainers Shoot High. Aim Low!

Eliezer Yudkowsky · 24 Oct 2007 1:13 UTC
87 points
34 comments · 1 min read · LW link

Learning how to learn

Neel Nanda · 30 Sep 2020 16:50 UTC
35 points
0 comments · 15 min read · LW link
(www.neelnanda.io)

Infra-Bayesianism Unwrapped

adamShimi · 20 Jan 2021 13:35 UTC
41 points
0 comments · 24 min read · LW link

Call For Distillers

johnswentworth · 4 Apr 2022 18:25 UTC
192 points
42 comments · 3 min read · LW link

DARPA Digital Tutor: Four Months to Total Technical Expertise?

JohnBuridan · 6 Jul 2020 23:34 UTC
200 points
19 comments · 7 min read · LW link

Expansive translations: considerations and possibilities

ozziegooen · 18 Sep 2020 15:39 UTC
43 points
15 comments · 6 min read · LW link

TAPs for Tutoring

Mark Xu · 24 Dec 2020 20:46 UTC
27 points
3 comments · 5 min read · LW link

(Summary) Sequence Highlights—Thinking Better on Purpose

qazzquimby · 2 Aug 2022 17:45 UTC
32 points
3 comments · 11 min read · LW link

Expertise and advice

John_Maxwell · 27 May 2012 1:49 UTC
25 points
4 comments · 1 min read · LW link

Avoid Unnecessarily Political Examples

Raemon · 11 Jan 2021 5:41 UTC
104 points
42 comments · 3 min read · LW link

Discovery fiction for the Pythagorean theorem

riceissa · 19 Jan 2021 2:09 UTC
15 points
1 comment · 4 min read · LW link

Inversion of theorems into definitions when generalizing

riceissa · 4 Aug 2019 17:44 UTC
25 points
3 comments · 5 min read · LW link

Think like an educator about code quality

Adam Zerner · 27 Mar 2021 5:43 UTC
44 points
8 comments · 8 min read · LW link

99% shorter

philh · 27 May 2021 19:50 UTC
16 points
0 comments · 6 min read · LW link
(reasonableapproximation.net)

An Apprentice Experiment in Python Programming

4 Jul 2021 3:29 UTC
66 points
4 comments · 9 min read · LW link

An Apprentice Experiment in Python Programming, Part 2

29 Jul 2021 7:39 UTC
30 points
18 comments · 10 min read · LW link

Calibration proverbs

Malmesbury · 11 Jan 2022 5:11 UTC
75 points
19 comments · 1 min read · LW link

Job Offering: Help Communicate Infrabayesianism

23 Mar 2022 18:35 UTC
135 points
21 comments · 1 min read · LW link

Summary: “How to Write Quickly...” by John Wentworth

Pablo Repetto · 11 Apr 2022 23:26 UTC
3 points
0 comments · 2 min read · LW link
(pabloernesto.github.io)

[Question] What to include in a guest lecture on existential risks from AI?

Aryeh Englander · 13 Apr 2022 17:03 UTC
20 points
9 comments · 1 min read · LW link

Features that make a report especially helpful to me

lukeprog · 14 Apr 2022 1:12 UTC
37 points
0 comments · 2 min read · LW link

Rationality Dojo

lsusr · 24 Apr 2022 0:53 UTC
13 points
5 comments · 1 min read · LW link

Calling for Student Submissions: AI Safety Distillation Contest

Aris · 24 Apr 2022 1:53 UTC
48 points
15 comments · 4 min read · LW link

Infra-Bayesianism Distillation: Realizability and Decision Theory

Thomas Larsen · 26 May 2022 21:57 UTC
30 points
9 comments · 18 min read · LW link

[Request for Distillation] Coherence of Distributed Decisions With Different Inputs Implies Conditioning

johnswentworth · 25 Apr 2022 17:01 UTC
22 points
14 comments · 2 min read · LW link

How to get people to produce more great exposition? Some strategies and their assumptions

riceissa · 25 May 2022 22:30 UTC
26 points
10 comments · 3 min read · LW link

Exposition as science: some ideas for how to make progress

riceissa · 8 Jul 2022 1:29 UTC
14 points
0 comments · 8 min read · LW link

A distillation of Evan Hubinger’s training stories (for SERI MATS)

Daphne_W · 18 Jul 2022 3:38 UTC
15 points
1 comment · 10 min read · LW link

Pitfalls with Proofs

scasper · 19 Jul 2022 22:21 UTC
19 points
21 comments · 8 min read · LW link

Distillation Contest—Results and Recap

Aris · 29 Jul 2022 17:40 UTC
33 points
0 comments · 7 min read · LW link

[Question] Which intro-to-AI-risk text would you recommend to...

Sherrinford · 1 Aug 2022 9:36 UTC
12 points
1 comment · 1 min read · LW link

Seeking PCK (Pedagogical Content Knowledge)

CFAR!Duncan · 12 Aug 2022 4:15 UTC
35 points
9 comments · 5 min read · LW link

AI alignment as “navigating the space of intelligent behaviour”

Nora_Ammann · 23 Aug 2022 13:28 UTC
18 points
0 comments · 6 min read · LW link

Alignment is hard. Communicating that, might be harder

Eleni Angelou · 1 Sep 2022 16:57 UTC
7 points
8 comments · 3 min read · LW link

How To Know What the AI Knows—An ELK Distillation

Fabien Roger · 4 Sep 2022 0:46 UTC
5 points
0 comments · 5 min read · LW link

Summaries: Alignment Fundamentals Curriculum

Leon Lang · 18 Sep 2022 13:08 UTC
43 points
3 comments · 1 min read · LW link
(docs.google.com)

Power-Seeking AI and Existential Risk

Antonio Franca · 11 Oct 2022 22:50 UTC
5 points
0 comments · 9 min read · LW link

Real-Time Research Recording: Can a Transformer Re-Derive Positional Info?

Neel Nanda · 1 Nov 2022 23:56 UTC
68 points
14 comments · 1 min read · LW link
(youtu.be)

Distillation Experiment: Chunk-Knitting

AllAmericanBreakfast · 7 Nov 2022 19:56 UTC
9 points
1 comment · 6 min read · LW link

The No Free Lunch theorem for dummies

Steven Byrnes · 5 Dec 2022 21:46 UTC
28 points
15 comments · 3 min read · LW link

Does anyone use advanced media projects?

ryan_b · 20 Jun 2018 23:33 UTC
33 points
5 comments · 1 min read · LW link

Teaching the Unteachable

Eliezer Yudkowsky · 3 Mar 2009 23:14 UTC
52 points
18 comments · 6 min read · LW link

The Fundamental Question—Rationality computer game design

Kaj_Sotala · 13 Feb 2013 13:45 UTC
61 points
68 comments · 9 min read · LW link

Zetetic explanation

Benquo · 27 Aug 2018 0:12 UTC
88 points
138 comments · 6 min read · LW link
(benjaminrosshoffman.com)

Paternal Formats

abramdemski · 9 Jun 2019 1:26 UTC
50 points
35 comments · 2 min read · LW link

Teachable Rationality Skills

Eliezer Yudkowsky · 27 May 2011 21:57 UTC
72 points
263 comments · 1 min read · LW link

Five-minute rationality techniques

sketerpot · 10 Aug 2010 2:24 UTC
71 points
237 comments · 2 min read · LW link

Just One Sentence

Eliezer Yudkowsky · 5 Jan 2013 1:27 UTC
64 points
142 comments · 1 min read · LW link

Media bias

PhilGoetz · 5 Jul 2009 16:54 UTC
39 points
47 comments · 1 min read · LW link

The RAIN Framework for Informational Effectiveness

ozziegooen · 13 Feb 2019 12:54 UTC
35 points
16 comments · 6 min read · LW link

The Up-Goer Five Game: Explaining hard ideas with simple words

Rob Bensinger · 5 Sep 2013 5:54 UTC
44 points
82 comments · 2 min read · LW link

Rationality Games & Apps Brainstorming

lukeprog · 9 Jul 2012 3:04 UTC
42 points
59 comments · 2 min read · LW link

How not to be a Naïve Computationalist

diegocaleiro · 13 Apr 2011 19:45 UTC
38 points
36 comments · 2 min read · LW link

Dense Math Notation

JK_Ravenclaw · 1 Apr 2011 3:37 UTC
33 points
23 comments · 1 min read · LW link

Numeracy neglect—A personal postmortem

vlad.proex · 27 Sep 2020 15:12 UTC
80 points
29 comments · 9 min read · LW link

Moved from Moloch’s Toolbox: Discussion re style of latest Eliezer sequence

habryka · 5 Nov 2017 2:22 UTC
7 points
1 comment · 3 min read · LW link

Short Primers on Crucial Topics

lukeprog · 31 May 2012 0:46 UTC
35 points
24 comments · 1 min read · LW link

Great Explanations

lukeprog · 31 Oct 2011 23:58 UTC
34 points
116 comments · 2 min read · LW link

A LessWrong “rationality workbook” idea

jwhendy · 9 Jan 2011 17:52 UTC
25 points
26 comments · 3 min read · LW link

Debugging the student

Adam Zerner · 16 Dec 2020 7:07 UTC
43 points
7 comments · 4 min read · LW link

Retrospective on Teaching Rationality Workshops

Neel Nanda · 3 Jan 2021 17:15 UTC
59 points
2 comments · 31 min read · LW link

[Question] What currents of thought on LessWrong do you want to see distilled?

ryan_b · 8 Jan 2021 21:43 UTC
48 points
19 comments · 1 min read · LW link

An Apprentice Experiment in Python Programming, Part 3

16 Aug 2021 4:42 UTC
14 points
11 comments · 22 min read · LW link

Distilling and approaches to the determinant

AprilSR · 6 Apr 2022 6:34 UTC
6 points
0 comments · 6 min read · LW link

Deriving Conditional Expected Utility from Pareto-Efficient Decisions

Thomas Kwa · 5 May 2022 3:21 UTC
23 points
1 comment · 6 min read · LW link

How RL Agents Behave When Their Actions Are Modified? [Distillation post]

PabloAMC · 20 May 2022 18:47 UTC
21 points
0 comments · 8 min read · LW link

The Solomonoff Prior is Malign

Mark Xu · 14 Oct 2020 1:33 UTC
148 points
52 comments · 16 min read · LW link · 3 reviews

Universality Unwrapped

adamShimi · 21 Aug 2020 18:53 UTC
28 points
2 comments · 18 min read · LW link

Imitative Generalisation (AKA ‘Learning the Prior’)

Beth Barnes · 10 Jan 2021 0:30 UTC
92 points
14 comments · 12 min read · LW link

Does SGD Produce Deceptive Alignment?

Mark Xu · 6 Nov 2020 23:48 UTC
85 points
6 comments · 16 min read · LW link

Explaining inner alignment to myself

Jeremy Gillen · 24 May 2022 23:10 UTC
9 points
2 comments · 10 min read · LW link

Croesus, Cerberus, and the magpies: a gentle introduction to Eliciting Latent Knowledge

Alexandre Variengien · 27 May 2022 17:58 UTC
14 points
0 comments · 16 min read · LW link

Deconfusing Landauer’s Principle

euanmclean · 27 May 2022 17:58 UTC
44 points
12 comments · 15 min read · LW link

Understanding Selection Theorems

adamk · 28 May 2022 1:49 UTC
35 points
3 comments · 7 min read · LW link

Distilled—AGI Safety from First Principles

Harrison G · 29 May 2022 0:57 UTC
8 points
1 comment · 14 min read · LW link

Abram Demski’s ELK thoughts and proposal—distillation

Rubi J. Hudson · 19 Jul 2022 6:57 UTC
15 points
4 comments · 16 min read · LW link

AI Safety Cheatsheet / Quick Reference

Zohar Jackson · 20 Jul 2022 9:39 UTC
3 points
0 comments · 1 min read · LW link
(github.com)

Announcing the Distillation for Alignment Practicum (DAP)

18 Aug 2022 19:50 UTC
21 points
3 comments · 3 min read · LW link

Epistemic Artefacts of (conceptual) AI alignment research

19 Aug 2022 17:18 UTC
30 points
1 comment · 5 min read · LW link

Deep Q-Networks Explained

Jay Bailey · 13 Sep 2022 12:01 UTC
37 points
4 comments · 22 min read · LW link

Understanding Infra-Bayesianism: A Beginner-Friendly Video Series

22 Sep 2022 13:25 UTC
113 points
6 comments · 2 min read · LW link

Distillation of “How Likely Is Deceptive Alignment?”

NickGabs · 18 Nov 2022 16:31 UTC
20 points
3 comments · 10 min read · LW link

MIRI’s “Death with Dignity”, but in 80 seconds.

strawberry calm · 6 Dec 2022 17:18 UTC
14 points
3 comments · 1 min read · LW link