I’ve been having a mysterious chronic health problem for the past several years and have learned a bunch of things that I wish I knew back when all of this started. I am thinking about how to write down what I’ve learned so others can benefit, but what’s tricky here is that while the knowledge I’ve gained seems wide-ranging, it’s also extremely specific to whatever my problems are, so I don’t know how well it generalizes to other people. I welcome suggestions on how to make my efforts more useful to others. I also welcome pointers to books/articles/posts that already discuss the stuff below in a competent way.
But anyway here is some stuff I could talk about:
Rationality lessons of mysterious health problems: certain health conditions (like mine) are quite mysterious, e.g. having no clear cause or shifting symptoms or nonspecific symptoms. This makes the health problem challenging not only on the basic suffering/emotional level, but also on the epistemic level. Some weird epistemic stuff happens when you are dealing with such a health problem, including:
Your “most likely diagnosis” will keep shifting or will have a wide distribution, which can be confusing to reason about (it’s almost as if the health problem is an agent diagonalizing against me). My “most likely diagnosis” has changed like five times.
Some mistakes I think I made: reasoning too literally about symptoms and ruling things out too early, instead of just being like “ok, maybe I have this thing” and trying the low-effort/safe interventions just to see if they help.
Weird interacting nature of symptoms: ignoring certain symptoms because they aren’t the most painful can end up being a bad idea because eliminating that symptom can help with a lot of other symptoms, because the mind/body is weird and interconnected.
I think turning to certain quacks is actually rational in the case of certain chronic illnesses. These quacks were never the ill person’s first choice, but after conventional medicine’s interventions have all failed and established medicine basically shrugs, says “we don’t know what this even is”, and gives up on you, it makes sense to keep going anyway and try wackier things.
You need to do “rationality on hard mode”—when you’re stressed, when you have brain fog, when you have few productive hours in the day, when your emotions get all messed up.
There is a kind of “lawyery” thing you have to do, where you simulate the objections people will raise about things you should have done or things you should try, and you have to preempt all that and try it and be like “see? I already tried it” so that they don’t have easy outs.
How to deal with the health bureaucracy (US-specific, but what I know is even more specific): how to get the benefits you need from health providers, how to deal with insurance, how to get referrals, how to push providers with questions, optimizing which health insurance to have.
How to do health research: how to find information about symptoms, how to organize your research, how to ask good questions when meeting doctors, the importance of talking to a lot of people.
Specific things I’ve learned about different drugs, nootropics, health devices, practices, etc., and which ones seem the most promising.
General life outlook stuff:
How to orient toward “this being your new life”.
How to stay motivated to live life and accomplish things while chronically ill; the hardcoreness of being ill for so long and what this does to your personality.
How to maintain a “health tracker”: how to keep track of your symptoms, what you did each day, what you ate, how you slept, etc. for future reference, and whether or not tracking any of this is useful.
Productivity hacks:
Daily goal-setting: how to get shit done even if you feel like shit every day.
The importance of having a “health buddy” who has similar health problems who you can talk to all the time, as having a chronic health problem can be very isolating (very few people can understand or support you in the way you need).
The importance of just trying lots of things to see what helps, and what this looks like in practice.
Basic health stuff that seems good to do regardless of what the cause of your symptoms is: nutrition, exercise, sleep, wackier stuff.
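For the “health tracker” item above, a minimal version can be as simple as appending one row per day to a CSV file. Here is a sketch of what that might look like (the column names and file name are just examples, not a recommendation of what to track):

```python
import csv
from datetime import date
from pathlib import Path

# Minimal daily health log: one row per day, appended to a CSV.
# Column names are just an example; track whatever seems relevant to you.
FIELDS = ["date", "symptoms", "severity_0_to_10", "interventions",
          "sleep_hours", "notes"]

def log_day(path, **entry):
    """Append today's entry, writing a header row if the file is new."""
    p = Path(path)
    is_new = not p.exists()
    with p.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({"date": date.today().isoformat(), **entry})

log_day("health_log.csv",
        symptoms="brain fog; fatigue", severity_0_to_10=6,
        interventions="magnesium; short walk", sleep_hours=7.5, notes="")
```

Plain-text/CSV has the advantage that it stays greppable and analyzable later, even on low-energy days, which matters given the “rationality on hard mode” point above.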
This all seems great honestly, I would love if there were more posts about this kind of thing. I’m especially into the rationality lessons angle (first bullet point), but the rest all seems useful too.
I’ve seen a lot of people face this situation and have to figure it out from scratch, and I don’t think much has been written about this kind of thing on LessWrong (though there is this). Sure lots has been written about it in general / not on LessWrong, but I found the vast majority of that to be extremely epistemically questionable, and/or to be really defeatist, like, “just accept that you will spend the remaining decades of your life entirely bed-ridden”.
I would say that I’d be interested in collaborating on a sequence about this, but I am already way overcommitted. But I could ask some rationalist friends who have gone through this, if you wanted collaborators.
Agreed on epistemically questionable info. I’ve seen a range of canned advice, including defeatist takes.
Lynette’s post was interesting because I think I also have something like POTS, but her post is very unlike something I would write myself, and I wouldn’t have found the post useful when I was starting out (I actually probably even read the post when it first came out and probably didn’t find it useful). I am puzzled at what this means for how generalizable people’s experiences are.
And thanks, I’d be interested in introductions to potential collaborators!
I’m also dealing with chronic illness and can relate to everything you listed. I’ve been thinking that a Discord server specifically for people with chronic illness in the rationality community might be helpful to make it easier for us to share notes and help each other. There are different Discord servers for various conditions unaffiliated with the rationality community, but they tend to not have great epistemic standards and generally have a different approach than what I’m looking for. Do you have any interest in a Discord server?
Agreed on the epistemic standards of random health groups, and yeah, I’d be interested in a Discord server. I am aware of this Facebook group, if you use Facebook, though it’s not very active.
Some stuff I’ve encountered that I mostly haven’t looked much into and haven’t really tried but seem potentially useful to me: heart rate variability biofeedback training, getting sunlight at specific times of day, photobiomodulation (e.g. Vielight), red light therapy, neurofeedback, transcranial magnetic stimulation, specific supplement regimes (example), green powders like Athletic Greens, certain kinds of meditation.
I think Discord servers based around specific books are an underappreciated form of academic support/community. I have been part of such a Discord server (for Terence Tao’s Analysis) for a few years now and have really enjoyed being a part of it.
Each chapter of the book gets two channels: one to discuss the reading material in that chapter, and one to discuss the exercises in that chapter. There are also channels for general discussion, introductions, and a few other things.
Such a Discord server has elements of university courses, Math Stack Exchange, Reddit, independent study groups, and random blog posts, but is different from all of them:
Unlike courses (but like Math SE, Reddit, and independent study groups), all participation is voluntary so the people in the community are selected for actually being interested in the material.
Unlike Math SE and Reddit (but like courses and independent study groups), one does not need to laboriously set the context each time one wants to ask a question or talk about something. It’s possible to just say “the second paragraph on page 76” or “Proposition 6.4.12(c)” and expect to be understood, because there is common knowledge of what the material is and the fact that everyone there has access to that material. In a subject like real analysis where there are many ways to develop the material, this is a big plus.
Unlike independent study groups and courses (but like Math SE and Reddit), there is no set pace or requirement to join the study group at a specific point in time. This means people can just show up whenever they start working on the book without worrying that they are behind and need to catch up to the discussion, because there is no single place in the book everyone is at. This also makes this kind of Discord server easier to set up because it does not require finding someone else who is studying the material at the same time, so there is less cost to coordination.
Unlike random forum/blog posts about the book, a dedicated Discord server can comprehensively cover the entire book and has the potential to be “alive/motivating” (it’s pretty demotivating to have a question about a blog post which was written years ago and where the author probably won’t respond; I think reliability is important for making it seem safe/motivating to ask questions).
I also like that Discord has an informal feel to it (less friction to just ask a question) and can be both synchronous and asynchronous.
I think these Discord servers aren’t that hard to set up and maintain. As long as there is one person there who has worked through the entire book, the server won’t seem “dead” and it should accumulate more users. (What’s the motivation for staying in the server if you’ve worked through the whole book? I think it provides a nice review/repetition of the material.) I’ve also noticed that earlier on I had to answer more questions in early chapters of the book, but now there are more people who’ve worked through the early chapters who can answer those questions, so I tend to focus on the later chapters now. So my concrete proposal is that more people, when they finish working through a book, should try to “adopt” the book by creating a Discord server and fielding questions from people who are still working through the book (and then advertising in some standard channels like a relevant subreddit). This requires little coordination ability (everyone from the second person onward selfishly benefits by joining the server and does not need to pay any costs).
I am uncertain how well this format would work for less technical books where there might not be a single answer to a question/a “ground truth” (which leaves room for people to give their opinions more).
(Thanks to people on the Tao Analysis Discord, especially pecfex for starting a discussion on the server about whether there are any similar servers, which gave me the idea to write this post, and Segun for creating the Tao Analysis Discord.)
Back in the 2010s, EAs spent a long time dunking on doctors for not having such a high impact (I’m going off memory here, but I think “instead of becoming a doctor, why don’t you do X instead” was a common career pitch). I mostly unreflectively agreed with these opinions for a long time, and still think that doctors have less impact compared to stuff like x-risk reduction. But after having more personal experience dealing with the medical world (3 primary care doctors, ~10 specialist doctors, 2 psychiatrists, 2 naturopaths, 3 therapists, 2 nutritionists/dieticians, 2 coaching type people, all in the last 4 years (I counted some people under multiple categories)), I think a really agenty/knowledgeable/capable doctor or therapist can actually have a huge impact on the world (just going by intuition of how many even healthy-seeming people have a lot of health problems that bring down their productivity a lot, how crippling it is to have a mysterious health problem like mine, etc; I haven’t actually tried crunching numbers). I think such a person is not likely to look like a typical doctor working in a hospital system though… probably more like a writer/researcher who also happens to do consultations with people.
If I had to rewrite the EA pitch for people who wanted to become doctors it would be something like “First think very hard about why you want to become a doctor, and if what you want is not specific to working in healthcare then maybe consider [list of common EA cause areas]. If you really want to work in healthcare though, that’s great, but please consider becoming this weirder thing that’s not quite a doctor, where first you learn a bunch of rationality/math/programming and then you learn as much as you can about medical stuff and then try to help people.”
The model for dunking on doctors was something like: there is a limited number of doctor positions, so even if the hypothetical best doctor ever chooses a different career, it will not mean fewer doctors; it will just mean that the second best doctor will take their place instead. But the second best doctor ever is also a very good doctor, so the difference in the outcome will be very small.
Now, I am not sure if I remember the argument correctly. But if I do, it is obviously flawed. Not only is the previous job of doctor#1 now taken by doctor#2, but the previous job of doctor#2 is now taken by doctor#3, etc., until we reach the hypothetical limit, where the previous job of doctor#N is taken by a person who previously wouldn’t have gotten the license, but now becomes doctor#N+1. So the overall change for the field of medicine is losing doctor#1 and gaining doctor#N+1 (and shifting the remaining doctors). The difference between doctor#1 (the best doctor ever) and doctor#N+1 (who barely gets the license), multiplied by the length of their careers, could indeed mean a difference of many lives saved. It is just not really visible, because all those lives are not saved at the same place, but distributed along the chain.
The same reasoning also applies to the effective altruists, of course. It’s just that there is no guarantee that the hypothetical best doctor ever will become the most impactful effective altruist ever. They might just as easily become a mediocre one.
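The chain argument above can be made concrete with a toy model (all skill numbers here are invented for illustration): the field’s net change is not doctor#1 versus doctor#2, but doctor#1 versus the marginal entrant doctor#N+1.

```python
# Toy model of the replaceability chain (skill numbers invented).
# Candidates are ranked by "skill"; there are n_positions licensed slots.
# If the best candidate leaves, everyone shifts up one slot and the
# marginal candidate (who otherwise wouldn't be licensed) enters.

def net_skill_change_if_best_leaves(skills, n_positions):
    """Change in total skill across all positions when candidate #1 leaves."""
    before = sum(skills[:n_positions])
    after = sum(skills[1:n_positions + 1])  # shift up; candidate #N+1 enters
    return after - before  # simplifies to skills[n_positions] - skills[0]

# Example: 5 positions, 6 candidates, skill declining down the ranking.
skills = [100, 95, 90, 85, 80, 40]
print(net_skill_change_if_best_leaves(skills, 5))  # -60: lost #1, gained #6
```

The telescoping sum is the whole point: the intermediate doctors cancel out, and the field is left with exactly the gap between the best doctor and the marginal one.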
Typographers focus almost exclusively on designing texts that are meant to be read linearly (and typography guidelines follow this as well, telling writers to limit line length, use a certain font size, etc.). But if you look at the actual stuff happening in the reader’s mind as they interact with a book or webpage, linear reading is only one of many possible ways of interacting with a text. In particular, searching for things, flipping around, cross-referencing, and other “movement” tasks are quite common. For such movement tasks, the standard typographic advice seems like a poor choice. Some websites, like the English Wikipedia until this year (example), seem to design for such movement tasks by making the font size smaller, line length longer, etc., but this runs into the opposite problem where if someone does want to linearly read an article on such a page, it will be harder to do so. Other websites, such as the 80,000 Hours podcast website (example), seem to come to a compromise by designing for both kinds of tasks simultaneously (but in doing so fit neither task perfectly). I propose that the typography of a page should dynamically change to match the reader’s current task. This may be a bit disorienting at first, but skilled readers would be able to have the best typography in any situation. I don’t have a great idea for how to implement this in practice (I welcome suggestions), but one naïve idea is to have a button at the corner of the page that can toggle between “absorbing/linear mode” and “movement mode” (and possibly other modes; I’m interested in hearing what other cognitive tasks should be prioritized by the design of a page).
Does life extension (without other technological progress to make the world in general safer) lead to more cautious life styles? The longer the expected years left, the more value there is in just staying alive compared to taking risks. Since death would mean missing out on all the positive experiences for the rest of one’s life, I think an expected value calculation would show that even a small risk is not worth taking. Does this mean all risks that don’t get magically fixed due to life extension (for example, activities like riding a motorcycle or driving on the highway seem risky even if we have life extension technology) are not worth taking? (There is the obvious exception where if one knows when one is going to die, then one can take more risks just like in a pre-life extension world as one reaches the end of one’s life.)
I haven’t thought about this much, and wouldn’t be surprised if I am making a silly error (in which case, I would appreciate having it pointed out to me!).
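The intuition behind the question can be made concrete with a toy expected-value calculation (all numbers here are made up for illustration; the micromort figure is only a rough ballpark, not a real actuarial number):

```python
# Toy calculation: an activity is "worth it" only if its benefit exceeds the
# expected life lost, which scales linearly with remaining life expectancy.

def expected_years_lost(p_death_per_outing, remaining_years):
    """Expected life-years lost from one outing with the given death risk."""
    return p_death_per_outing * remaining_years

# Suppose (very roughly) one motorcycle outing carries ~1 micromort of risk.
p_death = 1e-6

for remaining in (60, 3000):
    loss = expected_years_lost(p_death, remaining)
    print(f"remaining={remaining:>4} years -> "
          f"expected loss per outing: {loss * 365.25 * 24:.2f} hours")
```

With 60 years left, a 1-micromort outing costs about half an expected hour of life; with 3000 years left, the same outing costs about 26 expected hours, so an activity has to be ~50x more valuable to clear the bar. That said, this doesn’t show that *no* small risk is worth taking, only that the break-even threshold scales up with remaining lifespan.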
Suppose there’s a life extension treatment that resets someone to age 20. It’s readily available to most first world residents, with the usual methods of rationing. (wait lists for years in European countries, the usual insurance scam in the USA)
A rational human would, yes, buy space in a bunker and do all of their work remotely. There would be many variations of commercially available bunkers and security products, and the recent pandemic has shown that many high-value jobs can be done remotely.
However, the life extension treatment doesn’t change the ‘human firmware’. Novel experiences and mates will still remain highly pleasurable. Staying in the bunker and experiencing life via screens will likely cause various problems, ameliorated to some degree with artificial means (VR headsets, etc.).
So there will be flocks of humans who keep taking risks, and they will do the majority of the dying. I think I read the average lifespan would still be about 3000 years, which seems like a large improvement over the present situation.
In addition, this would probably be just a temporary state of affairs. (‘a dreary few centuries’) Neural backups, remote bodies, deep dive VR—there are many technologies that would make it practical to go out in the world safely. And a survival advantage for those humans who have the neurological traits to be able to survive the bunker years.
But, yes, I also think that society would slowly push for cleaning up many of the risks we consider ‘acceptable’ now. Cars, guns that are not smart and can be fired accidentally, air pollution, electrical wiring and gas plumbing—we have a ton of infrastructure and devices where the risk is small...over short present human lifespans. Everything would need to be a lot safer if we expected to live for thousands of years.
A while ago a PDF article was posted in the EA space (written by people who are pretty deep into EA) which used the Computer Modern font (the default font used in LaTeX) but which was clearly created using Microsoft Word. The cynical interpretation is that (on some level) the authors wanted to deceive readers into thinking that LaTeX was used to typeset the paper when in fact it was not. I do believe such deception will work, because very few people seem to know anything about typography. (I don’t claim to be much better; I’ve learned just a little bit more than the default state of zero knowledge.) I wonder how people feel about this sort of thing.
I vote for “it’s fine”. Maybe the authors just like how that font looks. And if somebody is judging a paper based on how it was typeset, I think that’s stupid and I don’t care if they get the wrong answer.
judging a paper based on [...] I don’t care if they get the wrong answer
There are things that influence the decision to take a look at all, and riceissa’s impression might be about damaging that game in a way that makes it less useful. If a paper is crafted out of newspaper clippings and liberally highlighted with markers, that’s not zero evidence, even as it’s screened off by the actual content.
(I have only given this a little thought, so wouldn’t be surprised if it is totally wrong. I’m curious to hear what people think.)
I’ve known about deductive vs inductive reasoning for a long time, but only recently heard about abductive reasoning. It now occurs to me that what we call “Solomonoff induction” might better be called “Solomonoff abduction”. From SEP:
It suggests that the best way to distinguish between induction and abduction is this: both are ampliative, meaning that the conclusion goes beyond what is (logically) contained in the premises (which is why they are non-necessary inferences), but in abduction there is an implicit or explicit appeal to explanatory considerations, whereas in induction there is not; in induction, there is only an appeal to observed frequencies or statistics.
In Solomonoff induction, we explicitly refer to the “world programs” that provide explanations for the sequence of bits that we observe, so according to the above criterion it fits under abduction rather than induction.
I found this Wikipedia article pretty interesting. Even in a supposedly copyright-maximalist country like the US, the font shapes themselves cannot be copyrighted, and design patents only last 15 years. Popular fonts like Helvetica have clones available for free. Other countries like Japan are similar, even though a full Japanese font requires designing 50,000+ glyphs! That is an insane amount of work that someone else can just take by copying all the shapes and repackaging it as a free font. In my experience there are only like a few main Japanese fonts, and I used to think it was just because it takes so much work to design such fonts, but now it occurs to me that the inability to make money from the design (because someone else can easily steal your designs) could be the bigger factor. (I have not yet done the virtuous thing of digging in to see if this is true.)
Lately I have been daydreaming about a mathematical monastery. I don’t know how coherent the idea is, and would be curious to hear feedback.
A mathematical monastery is a physical space where people gather to do a particular kind of math. The two main activities taking place in a mathematical monastery are meditative math and meditation about one’s relationship to math.
Meditative math: I think a lot of math that people do happens in a fast-paced and unreflective way. What I mean by this is that people solve a bunch of exercises, and then move on quickly to the next thing. There is a rush to finish the problem set or textbook or course and to progress to the main theorems or a more advanced course or the frontier of knowledge so that one might add to it. I think all of this can be good. But sometimes it’s nice to slow way down, to focus on the basics, or pay attention to how one’s mind is representing the mathematical object, or pay attention to how one just solved a problem. What associations did my mind make? Can I write down a stream-of-consciousness log of how I solved a problem? Did I get a gut sense of how long a problem would take me, and how reliable was that gut sense? Are the pictures I see in my head the same as the ones you see in yours? How did the first person who figured this out do so, and what was going on in their mind? Or how might someone have discovered this, even if it is not historically accurate? If I make an error while working on a problem, can I do a stack trace on that? How does this problem make me feel? What are the different kinds of boredom one can feel while doing math? All of these questions would get explored in meditative math.
Meditation about one’s relationship to math: Here the idea is to think about questions like: Why am I interested in math? What do I want to get out of it? What meaning does it give to my life? Why do I want to spend marginal time on math (rather than on other things)? If I had a lot more money, or a more satisfying social life, would I still be interested in doing math? How can I get better at math? What even does it mean to get better at math? Like, what are the different senses in which one can be “better at math”, and which ones do I care about and why? Why do I like certain pieces of math better than others, and why does someone else like some other piece of math better?
As the links above show, some of this already happens in bits and pieces, in a pretty solitary manner. I think it would be nice if there was a place where it could happen in a more concentrated way and where people could get together and talk about it as they are doing it.
Above I focused on how being at a mathematical monastery differs from regular mathematical practice. But it also differs from being at a monastery. For example, I don’t think a strict daily schedule will be an emphasis. I also imagine people would be talking to each other all the time, rather than silently meditating on their own.
Besides monasteries and cults, I think Recurse Center is the closest thing I know about. But my understanding is that Recurse Center has a more self-study/unschooling feel to it, rather than a “let’s focus on what our minds and emotions are doing with regard to programming” feel to it.
I don’t think there is anything too special about math here. There could probably be a “musical monastery” or “drawing monastery” or “video game design monastery” or whatever. Math just happens to be what I am interested in, and that’s the context in which these thoughts came to me.
Somewhat related, though different in various ways, is this post by Bryan Caplan: https://www.econlib.org/the-cause-of-what-i-feel-is-what-i-do-how-i-eliminate-pain/
Yes. I have something like ME/CFS, and everything you said resonates with me.
mind elaborating?
Some stuff I’ve encountered that I mostly haven’t looked much into and haven’t really tried but seem potentially useful to me: heart rate variability biofeedback training, getting sunlight at specific times of day, photobiomodulation (e.g. Vielight), red light therapy, neurofeedback, transcranial magnetic stimulation, specific supplement regimes (example), green powders like Athletic Greens, certain kinds of meditation.
I think Discord servers based around specific books are an underappreciated form of academic support/community. I have been part of such a Discord server (for Terence Tao’s Analysis) for a few years now and have really enjoyed being a part of it.
Each chapter of the book gets two channels: one to discuss the reading material in that chapter, and one to discuss the exercises in that chapter. There are also channels for general discussion, introductions, and a few other things.
Such a Discord server has elements of university courses, Math Stack Exchange, Reddit, independent study groups, and random blog posts, but is different from all of them:
Unlike courses (but like Math SE, Reddit, and independent study groups), all participation is voluntary so the people in the community are selected for actually being interested in the material.
Unlike Math SE and Reddit (but like courses and independent study groups), one does not need to laboriously set the context each time one wants to ask a question or talk about something. It’s possible to just say “the second paragraph on page 76” or “Proposition 6.4.12(c)” and expect to be understood, because there is common knowledge of what the material is and the fact that everyone there has access to that material. In a subject like real analysis where there are many ways to develop the material, this is a big plus.
Unlike independent study groups and courses (but like Math SE and Reddit), there is no set pace or requirement to join the study group at a specific point in time. This means people can just show up whenever they start working on the book without worrying that they are behind and need to catch up to the discussion, because there is no single place in the book everyone is at. This also makes this kind of Discord server easier to set up because it does not require finding someone else who is studying the material at the same time, so there is less cost to coordination.
Unlike random forum/blog posts about the book, a dedicated Discord server can comprehensively cover the entire book and has the potential to be “alive/motivating” (it’s pretty demotivating to have a question about a blog post which was written years ago and where the author probably won’t respond; I think reliability is important for making it seem safe/motivating to ask questions).
I also like that Discord has an informal feel to it (less friction to just ask a question) and can be both synchronous and asynchronous.
I think these Discord servers aren’t that hard to set up and maintain. As long as there is one person there who has worked through the entire book, the server won’t seem “dead” and it should accumulate more users. (What’s the motivation for staying in the server if you’ve worked through the whole book? I think it provides a nice review/repetition of the material.) I’ve also noticed that earlier on I had to answer more questions in early chapters of the book, but now there are more people who’ve worked through the early chapters who can answer those questions, so I tend to focus on the later chapters now. So my concrete proposal is that more people, when they finish working through a book, should try to “adopt” the book by creating a Discord server and fielding questions from people who are still working through the book (and then advertising in some standard channels like a relevant subreddit). This requires little coordination ability (everyone from the second person onward selfishly benefits by joining the server and does not need to pay any costs).
I am uncertain how well this format would work for less technical books where there might not be a single answer to a question/a “ground truth” (which leaves room for people to give their opinions more).
(Thanks to people on the Tao Analysis Discord, especially pecfex for starting a discussion on the server about whether there are any similar servers, which gave me the idea to write this post, and Segun for creating the Tao Analysis Discord.)
This is a pretty cool concept.
Back in the 2010s, EAs spent a long time dunking on doctors for not having such a high impact (I’m going off memory here, but I think “instead of becoming a doctor, why don’t you do X instead” was a common career pitch). I mostly agreed with these opinions unreflectively for a long time, and still think that doctors have less impact compared to stuff like x-risk reduction. But after having more personal experience dealing with the medical world (3 primary care doctors, ~10 specialist doctors, 2 psychiatrists, 2 naturopaths, 3 therapists, 2 nutritionists/dieticians, 2 coaching type people, all in the last 4 years (I counted some people under multiple categories)), I think a really agenty/knowledgeable/capable doctor or therapist can actually have a huge impact on the world (just going by intuition of how many even healthy-seeming people have a lot of health problems that bring down their productivity a lot, how crippling it is to have a mysterious health problem like mine, etc.; I haven’t actually tried crunching numbers). I think such a person is not likely to look like a typical doctor working in a hospital system though… probably more like a writer/researcher who also happens to do consultations with people.
If I had to rewrite the EA pitch for people who wanted to become doctors it would be something like “First think very hard about why you want to become a doctor, and if what you want is not specific to working in healthcare then maybe consider [list of common EA cause areas]. If you really want to work in healthcare though, that’s great, but please consider becoming this weirder thing that’s not quite a doctor, where first you learn a bunch of rationality/math/programming and then you learn as much as you can about medical stuff and then try to help people.”
The model for dunking on doctors was something like: there is a limited number of doctor positions, so even if the hypothetical best doctor ever chooses a different career, it will not mean fewer doctors; it will just mean that the second best doctor will take their place instead. But the second best doctor ever is also a very good doctor, so the difference in the outcome will be very small.
Now, I am not sure if I remember the argument correctly. But if I do, it is obviously flawed: not only is the previous job of doctor#1 now taken by doctor#2, but the previous job of doctor#2 is now taken by doctor#3, and so on, until we reach the hypothetical limit, where the previous job of doctor#N is taken by a person who previously wouldn’t have gotten the license, but who now becomes doctor#N+1. So the overall change for the field of medicine is losing doctor#1 and gaining doctor#N+1 (and shifting the remaining doctors). The difference between doctor#1 (the best doctor ever) and doctor#N+1 (who barely gets the license), multiplied by the length of their careers, could indeed mean a difference of many lives saved. It is just not very visible, because all those lives are not saved at the same place, but are distributed along the chain.
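The chain argument can be sketched numerically. This is a hedged illustration only: the number of positions, the lives-saved figures, and the linear decline in candidate quality are all made-up assumptions, not real estimates.

```python
# Illustrative sketch of the replaceability chain (all numbers hypothetical).
# Suppose there are N doctor positions and candidate quality declines
# linearly from the best candidate down to the marginal one.
N = 1000

def lives_saved_per_year(rank):
    # hypothetical: the best candidate (rank 1) saves 10 lives/year,
    # the marginal candidate (rank N + 1) saves 2
    return 10 - 8 * (rank - 1) / N

career_years = 40

# If doctor#1 leaves, everyone shifts up one rank and candidate N+1 gets
# licensed. The field's net loss is the gap between doctor#1 and the
# marginal candidate, spread invisibly along the whole chain.
net_loss = (lives_saved_per_year(1) - lives_saved_per_year(N + 1)) * career_years
print(net_loss)  # 320.0 lives over a career, under these made-up numbers
```

Under these assumptions the loss is substantial even though no single hospital ever notices it, which matches the point about the lives being distributed along the chain.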
The same reasoning also applies to effective altruists, of course. It’s just that there is no guarantee that the hypothetical best doctor ever will become the most impactful effective altruist ever. They might just as easily become a mediocre one.
Typographers focus almost exclusively on designing texts that are meant to be read linearly (and typography guidelines follow this as well, telling writers to limit line length, use a certain font size, etc.). But if you look at the actual stuff happening in the reader’s mind as they interact with a book or webpage, linear reading is only one of many possible ways of interacting with a text. In particular, searching for things, flipping around, cross-referencing, and other “movement” tasks are quite common. For such movement tasks, the standard typographic advice seems like a poor choice.

Some websites, like the English Wikipedia until this year (example), seem to design for such movement tasks by making the font size smaller, line length longer, etc., but this runs into the opposite problem: if someone does want to linearly read an article on such a page, it will be harder to do so. Other websites, such as the 80,000 Hours podcast website (example), seem to compromise by designing for both kinds of tasks simultaneously (but in doing so fit neither task perfectly).

I propose that the typography of a page should dynamically change to match the reader’s current task. This may be a bit disorienting at first, but skilled readers would be able to have the best typography in any situation. I don’t have a great idea for how to implement this in practice (I welcome suggestions), but one naïve idea is to have a button at the corner of the page that can toggle between “absorbing/linear mode” and “movement mode” (and possibly other modes; I’m interested in hearing what other cognitive tasks should be prioritized by the design of a page).
Does life extension (without other technological progress to make the world in general safer) lead to more cautious life styles? The longer the expected years left, the more value there is in just staying alive compared to taking risks. Since death would mean missing out on all the positive experiences for the rest of one’s life, I think an expected value calculation would show that even a small risk is not worth taking. Does this mean all risks that don’t get magically fixed due to life extension (for example, activities like riding a motorcycle or driving on the highway seem risky even if we have life extension technology) are not worth taking? (There is the obvious exception where if one knows when one is going to die, then one can take more risks just like in a pre-life extension world as one reaches the end of one’s life.)
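The expected-value intuition can be made concrete with a toy calculation. The annual fatality risk below is a made-up placeholder, not a real statistic; the point is only how the cost of a fixed risk scales with years remaining.

```python
# Hedged sketch: expected life-years lost to a risky activity scale with
# years remaining, so the same risk costs far more under life extension.
def expected_years_lost(annual_fatality_risk, years_remaining):
    return annual_fatality_risk * years_remaining

risk = 1e-4  # hypothetical annual fatality risk of, say, a motorcycle habit

print(expected_years_lost(risk, 50))    # normal remaining lifespan: ~0.005 years
print(expected_years_lost(risk, 5000))  # with life extension: ~0.5 years
```

Under these made-up numbers, the same activity costs a hundred times more expected life when there are a hundred times more years at stake, which is the intuition behind life extension leading to more cautious lifestyles.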
I haven’t thought about this much, and wouldn’t be surprised if I am making a silly error (in which case, I would appreciate having it pointed out to me!).
There are two factors here.
Suppose there’s a life extension treatment that resets someone to age 20. It’s readily available to most first-world residents, with the usual methods of rationing (multi-year wait lists in European countries, the usual insurance scam in the USA).
A rational human would, yes, buy space in a bunker and do all of their work remotely. There would be many variations of commercially available bunkers and security products, and the recent pandemic showed that many high-value jobs can be done remotely.
However, the life extension treatment doesn’t change the ‘human firmware’. Novel experiences and mates will still remain highly pleasurable. Staying in the bunker and experiencing life via screens will likely cause various problems, ameliorated to some degree by artificial means (VR headsets, etc.).
So there will be flocks of humans who keep taking risks, and they will do the majority of the dying. I think I read that the average lifespan would still be about 3,000 years, which seems like a large improvement over the present situation.
In addition, this would probably be just a temporary state of affairs (‘a dreary few centuries’). Neural backups, remote bodies, deep-dive VR: there are many technologies that would make it practical to go out into the world safely. And there would be a survival advantage for those humans who have the neurological traits to survive the bunker years.
But, yes, I also think that society would slowly push to clean up many of the risks we consider ‘acceptable’ now. Cars, guns that are not smart and can be fired accidentally, air pollution, electrical wiring and gas plumbing: we have a ton of infrastructure and devices where the risk is small... over short present human lifespans. Everything would need to be a lot safer if we expected to live for thousands of years.
A while ago a PDF article was posted in the EA space (written by people who are pretty deep into EA) which used the Computer Modern font (the default font used in LaTeX) but which was clearly created using Microsoft Word. The cynical interpretation is that (on some level) the authors wanted to deceive readers into thinking that LaTeX was used to typeset the paper when in fact it was not. I do believe such deception will work, because very few people seem to know anything about typography. (I don’t claim to be much better; I’ve learned just a little bit more than the default state of zero knowledge.) I wonder how people feel about this sort of thing.
I vote for “it’s fine”. Maybe the authors just like how that font looks. And if somebody is judging a paper based on how it was typeset, I think that’s stupid and I don’t care if they get the wrong answer.
There are things that influence the decision to take a look at all, and riceissa’s impression might be about damaging that game in a way that makes it less useful. If a paper is crafted out of newspaper clippings and liberally highlighted with markers, that’s not zero evidence, even as it’s screened off by the actual content.
(I have only given this a little thought, so wouldn’t be surprised if it is totally wrong. I’m curious to hear what people think.)
I’ve known about deductive vs inductive reasoning for a long time, but only recently heard about abductive reasoning. It now occurs to me that what we call “Solomonoff induction” might better be called “Solomonoff abduction”. From SEP:
In Solomonoff induction, we explicitly refer to the “world programs” that provide explanations for the sequence of bits that we observe, so according to the above criterion it fits under abduction rather than induction.
I found this Wikipedia article pretty interesting. Even in a supposedly copyright-maximalist country like the US, the font shapes themselves cannot be copyrighted, and design patents only last 15 years. Popular fonts like Helvetica have clones available for free. Other countries like Japan are similar, even though a full Japanese font requires designing 50,000+ glyphs! That is an insane amount of work that someone else can just take by copying all the shapes and repackaging it as a free font. In my experience there are only like a few main Japanese fonts, and I used to think it was just because it takes so much work to design such fonts, but now it occurs to me that the inability to make money from the design (because someone else can easily steal your designs) could be the bigger factor. (I have not yet done the virtuous thing of digging in to see if this is true.)
Lately I have been daydreaming about a mathematical monastery. I don’t know how coherent the idea is, and would be curious to hear feedback.
A mathematical monastery is a physical space where people gather to do a particular kind of math. The two main activities taking place in a mathematical monastery are meditative math and meditation about one’s relationship to math.
Meditative math: I think a lot of math that people do happens in a fast-paced and unreflective way. What I mean by this is that people solve a bunch of exercises, and then move on quickly to the next thing. There is a rush to finish the problem set or textbook or course and to progress to the main theorems or a more advanced course or the frontier of knowledge so that one might add to it. I think all of this can be good. But sometimes it’s nice to slow way down, to focus on the basics, or pay attention to how one’s mind is representing the mathematical object, or pay attention to how one just solved a problem. What associations did my mind make? Can I write down a stream-of-consciousness log of how I solved a problem? Did I get a gut sense of how long a problem would take me, and how reliable was that gut sense? Are the pictures I see in my head the same as the ones you see in yours? How did the first person who figured this out do so, and what was going on in their mind? Or how might someone have discovered this, even if it is not historically accurate? If I make an error while working on a problem, can I do a stack trace on that? How does this problem make me feel? What are the different kinds of boredom one can feel while doing math? All of these questions would get explored in meditative math.
Meditation about one’s relationship to math: Here the idea is to think about questions like: Why am I interested in math? What do I want to get out of it? What meaning does it give to my life? Why do I want to spend marginal time on math (rather than on other things)? If I had a lot more money, or a more satisfying social life, would I still be interested in doing math? How can I get better at math? What even does it mean to get better at math? Like, what are the different senses in which one can be “better at math”, and which ones do I care about and why? Why do I like certain pieces of math better than others, and why does someone else like some other piece of math better?
As the links above show, some of this already happens in bits and pieces, in a pretty solitary manner. I think it would be nice if there was a place where it could happen in a more concentrated way and where people could get together and talk about it as they are doing it.
Above I focused on how being at a mathematical monastery differs from regular mathematical practice. But it also differs from being at a monastery. For example, I don’t think a strict daily schedule will be an emphasis. I also imagine people would be talking to each other all the time, rather than silently meditating on their own.
Besides monasteries and cults, I think Recurse Center is the closest thing I know about. But my understanding is that Recurse Center has a more self-study/unschooling feel to it, rather than a “let’s focus on what our minds and emotions are doing with regard to programming” feel.
I don’t think there is anything too special about math here. There could probably be a “musical monastery” or “drawing monastery” or “video game design monastery” or whatever. Math just happens to be what I am interested in, and that’s the context in which these thoughts came to me.