I am Issa Rice. https://issarice.com/
For me, the thing that distinguishes exposition from teaching is that in exposition one is supposed to produce some artifact that does all the work of explaining something, whereas in teaching one is allowed to jump in and e.g. answer questions or “correct course” based on student confusion. This ability to “use a knowledgeable human” in the course of explanation makes teaching a significantly easier problem (though still a very interesting one!). It also means, though, that scaling teaching would require scaling the creation of knowledgeable people, which is the very thing we are trying to solve. Can we make use of just one knowledgeable human, and somehow produce an artifact that can scalably “copy” this knowledge to other humans? -- that’s the exposition problem. (This framing is basically Bloom’s 2 sigma problem.)
That’s very exciting to me! I personally study how science has worked and failed historically, and epistemic progress and vigilance in general, to make alignment go faster and better, so I’ll be interested to discuss exposition as a science with you (and maybe give feedback on your follow-up posts if you want. ;) )
Cool! I just shared my draft post with you that goes into detail about the “exposition as science” strategy; if that post seems interesting to you, I’d be happy to discuss more with you (or you can just leave comments on the post if that is easier).
Doesn’t do what? I understand Eliezer to be saying that he figured out AI risk via thinking things through himself (e.g., writing a story that involved outcome pumps; reflecting on orthogonality and instrumental convergence; etc.), rather than being argued into it by someone else who was worried about AI risk. If Eliezer didn’t do that, there would still presumably be someone prior to him who did that, since conclusions and ideas have to enter the world somehow. So I’m not understanding what you’re modeling as ridiculous.
My understanding of the history is that Eliezer did not realize the importance of alignment at first, and that he only did so later after arguing about it online with people like Nick Bostrom. See e.g. this thread. I don’t know enough of the history here, but it also seems logically possible that Bostrom could have, say, only realized the importance of alignment after conversing with other people who also didn’t realize the importance of alignment. In that case, there might be a “bubble” of humans who together satisfy the null string criterion, but no single human who does.
The null string criterion does seem a bit silly nowadays since I think the people who would have satisfied it would have sooner read about AI risk on e.g. LessWrong. So they wouldn’t even have the chance to live to age ~21 to see if they spontaneously invent the ideas.
With help from David Manheim, this post has now been turned into a paper. Thanks to everyone who commented on the post!
Would you say you are traumatized/did unschooling traumatize you/did attending the public high school and college traumatize you?
Do you have a sense of where your anxiety/distractability/”minor mental health problems” came from?
What was the chain of events leading up to you discovering LessWrong/the rationality community?
Vipul Naik has discovered that Alfred Marshall had basically the same idea (he even used the phrase “burn the mathematics”!) way back in 1906 (!), although he only described the procedure as a way to do economics research, rather than for decision-making. I’ve edited the wiki page to incorporate this information.
Thanks, I have added the quote to the page.
Lately I have been daydreaming about a mathematical monastery. I don’t know how coherent the idea is, and would be curious to hear feedback.
A mathematical monastery is a physical space where people gather to do a particular kind of math. The two main activities taking place in a mathematical monastery are meditative math and meditation about one’s relationship to math.
Meditative math: I think a lot of math that people do happens in a fast-paced and unreflective way. What I mean by this is that people solve a bunch of exercises, and then move on quickly to the next thing. There is a rush to finish the problem set or textbook or course and to progress to the main theorems or a more advanced course or the frontier of knowledge so that one might add to it. I think all of this can be good. But sometimes it’s nice to slow way down, to focus on the basics, or pay attention to how one’s mind is representing the mathematical object, or pay attention to how one just solved a problem. What associations did my mind make? Can I write down a stream-of-consciousness log of how I solved a problem? Did I get a gut sense of how long a problem would take me, and how reliable was that gut sense? Are the pictures I see in my head the same as the ones you see in yours? How did the first person who figured this out do so, and what was going on in their mind? Or how might someone have discovered this, even if it is not historically accurate? If I make an error while working on a problem, can I do a stack trace on that? How does this problem make me feel? What are the different kinds of boredom one can feel while doing math? All of these questions would get explored in meditative math.
Meditation about one’s relationship to math: Here the idea is to think about questions like: Why am I interested in math? What do I want to get out of it? What meaning does it give to my life? Why do I want to spend marginal time on math (rather than on other things)? If I had a lot more money, or a more satisfying social life, would I still be interested in doing math? How can I get better at math? What even does it mean to get better at math? Like, what are the different senses in which one can be “better at math”, and which ones do I care about and why? Why do I like certain pieces of math better than others, and why does someone else like some other piece of math better?
As the links above show, some of this already happens in bits and pieces, in a pretty solitary manner. I think it would be nice if there was a place where it could happen in a more concentrated way and where people could get together and talk about it as they are doing it.
Above I focused on how being at a mathematical monastery differs from regular mathematical practice. But it also differs from being at a monastery. For example, I don’t think a strict daily schedule will be an emphasis. I also imagine people would be talking to each other all the time, rather than silently meditating on their own.
Besides monasteries and cults, I think Recurse Center is the closest thing I know about. But my understanding is that Recurse Center has more of a self-study/unschooling feel, rather than a “let’s focus on what our minds and emotions are doing with regard to programming” feel.
I don’t think there is anything too special about math here. There could probably be a “musical monastery” or “drawing monastery” or “video game design monastery” or whatever. Math just happens to be what I am interested in, and that’s the context in which these thoughts came to me.
What does “±8 relationships” mean? Is that shorthand for 0±8, and if so, does that mean you’re giving the range 0–8, or are you also claiming you’ve potentially had a negative number of relationships (and if so, what would that mean)? Or does it mean “8±n relationships”, for some value of n?
I collected more links a while back at https://causeprioritization.org/Eliezer_Yudkowsky_on_the_Great_Stagnation though most of it is not on LW so it can’t be tagged.
Author’s note: this essay was originally published pseudonymously in 2017. It’s now being permanently rehosted at this link. I’ll be rehosting a small number of other upper-quintile essays from that era over the coming weeks.
Have you explained anywhere what brought you back to posting regularly on LessWrong/why you are now okay with hosting these essays on LessWrong? Did the problems you saw with LessWrong get fixed in the time since you deleted your old content? (I haven’t noticed any changes in the culture or moderation of LessWrong in that timeframe, so I am surprised to see you back.)
(I apologize if this comment is breaking some invisible Duncan rule about sticking to the object-level or something like that. Feel free to point me to a better place to ask my questions!)
I recently added some spaced repetition prompts to this essay so that while you read the essay you can answer questions, and if you sign up with the Orbit service you can also get email reminders to answer the prompts over time. Here’s my version with these prompts. (My version also has working footnotes.)
Thanks. I read the linked book review but the goals seem pretty different (automating teaching with the Digital Tutor vs trying to quickly distill and convey expert experience (without attempting to automate anything) with the stuff in Accelerated Expertise). My personal interest in “science of learning” stuff is to make self-study of math (and other technical subjects) more enjoyable/rewarding/efficient/effective, so the emphasis on automation was a key part of why the Digital Tutor caught my attention. I probably won’t read through Accelerated Expertise, but I would be curious if anyone else finds anything interesting there.
Robert Heaton calls this (or a similar enough idea) the Made-Up-Award Principle.
Maybe this? (There are a few subthreads on that post that mention linear regression.)