I am Issa Rice. https://issarice.com/
Would you say you are traumatized/did unschooling traumatize you/did attending the public high school and college traumatize you?
Do you have a sense of where your anxiety/distractibility/“minor mental health problems” came from?
What was the chain of events leading up to you discovering LessWrong/the rationality community?
Vipul Naik has discovered that Alfred Marshall had basically the same idea (he even used the phrase “burn the mathematics”!) way back in 1906 (!), although he only described the procedure as a way to do economics research, rather than for decision-making. I’ve edited the wiki page to incorporate this information.
Thanks, I have added the quote to the page.
Lately I have been daydreaming about a mathematical monastery. I don’t know how coherent the idea is, and would be curious to hear feedback.
A mathematical monastery is a physical space where people gather to do a particular kind of math. The two main activities taking place in a mathematical monastery are meditative math and meditation about one’s relationship to math.
Meditative math: I think a lot of math that people do happens in a fast-paced and unreflective way. What I mean by this is that people solve a bunch of exercises, and then move on quickly to the next thing. There is a rush to finish the problem set or textbook or course and to progress to the main theorems or a more advanced course or the frontier of knowledge so that one might add to it. I think all of this can be good. But sometimes it’s nice to slow way down, to focus on the basics, or pay attention to how one’s mind is representing the mathematical object, or pay attention to how one just solved a problem. What associations did my mind make? Can I write down a stream-of-consciousness log of how I solved a problem? Did I get a gut sense of how long a problem would take me, and how reliable was that gut sense? Are the pictures I see in my head the same as the ones you see in yours? How did the first person who figured this out do so, and what was going on in their mind? Or how might someone have discovered this, even if it is not historically accurate? If I make an error while working on a problem, can I do a stack trace on that? How does this problem make me feel? What are the different kinds of boredom one can feel while doing math? All of these questions would get explored in meditative math.
Meditation about one’s relationship to math: Here the idea is to think about questions like: Why am I interested in math? What do I want to get out of it? What meaning does it give to my life? Why do I want to spend marginal time on math (rather than on other things)? If I had a lot more money, or a more satisfying social life, would I still be interested in doing math? How can I get better at math? What even does it mean to get better at math? Like, what are the different senses in which one can be “better at math”, and which ones do I care about and why? Why do I like certain pieces of math better than others, and why does someone else like some other piece of math better?
As the links above show, some of this already happens in bits and pieces, in a pretty solitary manner. I think it would be nice if there were a place where it could happen in a more concentrated way and where people could get together and talk about it as they are doing it.
Above I focused on how being at a mathematical monastery differs from regular mathematical practice. But it also differs from being at a monastery. For example, I don’t think a strict daily schedule will be an emphasis. I also imagine people would be talking to each other all the time, rather than silently meditating on their own.
Besides monasteries and cults, I think Recurse Center is the closest thing I know about. But my understanding is that Recurse Center has a more self-study/unschooling feel to it, rather than a “let’s focus on what our minds and emotions are doing with regard to programming” feel to it.
I don’t think there is anything too special about math here. There could probably be a “musical monastery” or “drawing monastery” or “video game design monastery” or whatever. Math just happens to be what I am interested in, and that’s the context in which these thoughts came to me.
What does “±8 relationships” mean? Is that a shorthand for 0±8, and if so, does that mean you’re giving the range 0-8, or are you also claiming you’ve potentially had a negative number of relationships (and if so what does that mean)? Or does it mean “8±n relationships”, for some value of n?
I collected more links a while back at https://causeprioritization.org/Eliezer_Yudkowsky_on_the_Great_Stagnation though most of it is not on LW so can’t be tagged.
Author’s note: this essay was originally published pseudonymously in 2017. It’s now being permanently rehosted at this link. I’ll be rehosting a small number of other upper-quintile essays from that era over the coming weeks.
Have you explained anywhere what brought you back to posting regularly on LessWrong/why you are now okay with hosting these essays on LessWrong? Did the problems you see with LessWrong get fixed since you deleted your old content? (I haven’t noticed any changes in the culture or moderation of LessWrong in that timeframe, so I am surprised to see you back.)
(I apologize if this comment is breaking some invisible Duncan rule about sticking to the object-level or something like that. Feel free to point me to a better place to ask my questions!)
I recently added some spaced repetition prompts to this essay so that while you read the essay you can answer questions, and if you sign up with the Orbit service you can also get email reminders to answer the prompts over time. Here’s my version with these prompts. (My version also has working footnotes.)
Thanks. I read the linked book review but the goals seem pretty different (automating teaching with the Digital Tutor vs trying to quickly distill and convey expert experience (without attempting to automate anything) with the stuff in Accelerated Expertise). My personal interest in “science of learning” stuff is to make self-study of math (and other technical subjects) more enjoyable/rewarding/efficient/effective, so the emphasis on automation was a key part of why the Digital Tutor caught my attention. I probably won’t read through Accelerated Expertise, but I would be curious if anyone else finds anything interesting there.
Robert Heaton calls this (or a similar enough idea) the Made-Up-Award Principle.
Maybe this? (There are a few subthreads on that post that mention linear regression.)
I think Discord servers based around specific books are an underappreciated form of academic support/community. I have been part of such a Discord server (for Terence Tao’s Analysis) for a few years now and have really enjoyed being a part of it.
Each chapter of the book gets two channels: one to discuss the reading material in that chapter, and one to discuss the exercises in that chapter. There are also channels for general discussion, introductions, and a few other things.
Such a Discord server has elements of university courses, Math Stack Exchange, Reddit, independent study groups, and random blog posts, but is different from all of them:
Unlike courses (but like Math SE, Reddit, and independent study groups), all participation is voluntary so the people in the community are selected for actually being interested in the material.
Unlike Math SE and Reddit (but like courses and independent study groups), one does not need to laboriously set the context each time one wants to ask a question or talk about something. It’s possible to just say “the second paragraph on page 76” or “Proposition 6.4.12(c)” and expect to be understood, because there is common knowledge of what the material is and the fact that everyone there has access to that material. In a subject like real analysis where there are many ways to develop the material, this is a big plus.
Unlike independent study groups and courses (but like Math SE and Reddit), there is no set pace or requirement to join the study group at a specific point in time. This means people can just show up whenever they start working on the book without worrying that they are behind and need to catch up to the discussion, because there is no single place in the book everyone is at. This also makes this kind of Discord server easier to set up, because it does not require finding someone else who is studying the material at the same time, so coordination costs are lower.
Unlike random forum/blog posts about the book, a dedicated Discord server can comprehensively cover the entire book and has the potential to be “alive/motivating” (it’s pretty demotivating to have a question about a blog post which was written years ago and where the author probably won’t respond; I think reliability is important for making it seem safe/motivating to ask questions).
I also like that Discord has an informal feel to it (less friction to just ask a question) and can be both synchronous and asynchronous.
I think these Discord servers aren’t that hard to set up and maintain. As long as there is one person there who has worked through the entire book, the server won’t seem “dead” and it should accumulate more users. (What’s the motivation for staying in the server if you’ve worked through the whole book? I think it provides a nice review/repetition of the material.) I’ve also noticed that earlier on I had to answer more questions in early chapters of the book, but now there are more people who’ve worked through the early chapters who can answer those questions, so I tend to focus on the later chapters now. So my concrete proposal is that more people, when they finish working through a book, should try to “adopt” the book by creating a Discord server and fielding questions from people who are still working through the book (and then advertising in some standard channels like a relevant subreddit). This requires little coordination ability (everyone from the second person onward selfishly benefits by joining the server and does not need to pay any costs).
I am uncertain how well this format would work for less technical books where there might not be a single answer to a question/a “ground truth” (which leaves room for people to give their opinions more).
(Thanks to people on the Tao Analysis Discord, especially pecfex for starting a discussion on the server about whether there are any similar servers, which gave me the idea to write this post, and Segun for creating the Tao Analysis Discord.)
I learned about the abundance of available resources this past spring.
I’m curious what this is referring to.
Rob, are you able to disclose why people at Open Phil are interested in learning more decision theory? It seems a little far away from the AI strategy reports they’ve been publishing in recent years, and it also seemed like they were happy to keep funding MIRI (via their Committee for Effective Altruism Support) despite disagreements about the value of HRAD research, so the sudden interest in decision theory is intriguing.
I am also running into this problem now with the Markdown editor. I switched over from the new rich editor because that one didn’t support footnotes, whereas the Markdown one does. It seems like there is no editor that can both scale images and do footnotes, which is frustrating.
Edit: I ended up going with the rich editor despite broken footnotes since that seemed like the less bad of the two problems.
Re (a): I looked at chapters 4 and 5 of Superintelligence again, and I can kind of see what you mean, but I’m also frustrated that Bostrom seems really non-committal in the book. He lists a whole bunch of possibilities but then doesn’t seem to actually come out and give his mainline visualization/”median future”. For example he looks at historical examples of technology races and compares how much lag there was, which seems a lot like the kind of thinking you are doing, but then he also says things like “For example, if human-level AI is delayed because one key insight long eludes programmers, then when the final breakthrough occurs, the AI might leapfrog from below to radically above human level without even touching the intermediary rungs.” which sounds like the deep math view. Another relevant quote:
Building a seed AI might require insights and algorithms developed over many decades by the scientific community around the world. But it is possible that the last critical breakthrough idea might come from a single individual or a small group that succeeds in putting everything together. This scenario is less realistic for some AI architectures than others. A system that has a large number of parts that need to be tweaked and tuned to work effectively together, and then painstakingly loaded with custom-made cognitive content, is likely to require a larger project. But if a seed AI could be instantiated as a simple system, one whose construction depends only on getting a few basic principles right, then the feat might be within the reach of a small team or an individual. The likelihood of the final breakthrough being made by a small project increases if most previous progress in the field has been published in the open literature or made available as open source software.
Re (b): I don’t disagree with you here. (The only part that worries me is, I don’t have a good idea of what percentage of “AI safety people” shifted from one view to the other, whether there were also new people with different views coming into the field, etc.) I realize the OP was mainly about failure scenarios, but it did also mention takeoffs (“takeoffs won’t be too fast”) and I was most curious about that part.
I was reading parts of Superintelligence recently for something unrelated and noticed that Bostrom makes many of the same points as this post:
If the frontrunner is an AI system, it could have attributes that make it easier for it to expand its capabilities while reducing the rate of diffusion. In human-run organizations, economies of scale are counteracted by bureaucratic inefficiencies and agency problems, including difficulties in keeping trade secrets. These problems would presumably limit the growth of a machine intelligence project so long as it is operated by humans. An AI system, however, might avoid some of these scale diseconomies, since the AI’s modules (in contrast to human workers) need not have individual preferences that diverge from those of the system as a whole. Thus, the AI system could avoid a sizeable chunk of the inefficiencies arising from agency problems in human enterprises. The same advantage—having perfectly loyal parts—would also make it easier for an AI system to pursue long-range clandestine goals. An AI would have no disgruntled employees ready to be poached by competitors or bribed into becoming informants.