I am Issa Rice. https://issarice.com/
I think that if the umbrella blog post to which the user’s shortform posts (which are just comments) get added was created before 2022-06-23, then it won’t have agree/disagree votes, whereas umbrella posts created on or after that date do?
If you regularly paste sensitive data such as a password or card number, consider other options, such as your browser’s autofill or a password manager.
Some password managers like KeePassXC automatically clear the clipboard after 10 seconds or when you close the program (whichever comes first).
Some stuff I’ve encountered that I mostly haven’t looked much into and haven’t really tried but seem potentially useful to me: heart rate variability biofeedback training, getting sunlight at specific times of day, photobiomodulation (e.g. Vielight), red light therapy, neurofeedback, transcranial magnetic stimulation, specific supplement regimes (example), green powders like Athletic Greens, certain kinds of meditation.
Agreed on epistemically questionable info. I’ve seen a range of canned advice, including some of the defeatist variety.
Lynette’s post was interesting because I think I also have something like POTS, but her post is very unlike something I would write myself, and I wouldn’t have found the post useful when I was starting out (I actually probably even read the post when it first came out and probably didn’t find it useful). I’m puzzled about what this means for how generalizable people’s experiences are.
And thanks, I’d be interested in introductions to potential collaborators!
Agreed on the epistemic standards of random health groups, and yeah, I’d be interested in a Discord server. I am aware of this Facebook group, if you use Facebook, though it’s not very active.
I’ve been having a mysterious chronic health problem for the past several years and have learned a bunch of things that I wish I knew back when all of this started. I am thinking about how to write down what I’ve learned so others can benefit, but what’s tricky here is that while the knowledge I’ve gained seems wide-ranging, it’s also extremely specific to whatever my problems are, so I don’t know how well it generalizes to other people. I welcome suggestions on how to make my efforts more useful to others. I also welcome pointers to books/articles/posts that already discuss the stuff below in a competent way.
But anyway here is some stuff I could talk about:
Rationality lessons of mysterious health problems: certain health conditions (like mine) are quite mysterious, e.g. having no clear cause or shifting symptoms or nonspecific symptoms. This makes the health problem not only challenging on the basic suffering/emotional level, but also at an epistemic level. Some weird epistemic stuff happens when you are dealing with such a health problem, including:
Your “most likely diagnosis” will keep shifting or will have a wide distribution, which can be confusing to reason about (it’s almost as if the health problem is an agent diagonalizing against me). My “most likely diagnosis” has changed like five times.
Some mistakes I think I made: reasoning too literally about symptoms and ruling things out too early, instead of just being like “ok, maybe I have this thing” and trying the low-effort/safe interventions to see if they help.
Weird interacting nature of symptoms: ignoring certain symptoms because they aren’t the most painful can end up being a bad idea because eliminating that symptom can help with a lot of other symptoms, because the mind/body is weird and interconnected.
I think turning to certain quacks is actually rational in the case of certain chronic illnesses. These quacks were never the ill person’s first choice; but after conventional/established medicine’s interventions have all failed, and established medicine basically shrugs, says “we don’t even know what this is”, and gives up on you, it makes sense to keep going anyway and try wackier things.
You need to do “rationality on hard mode”—when you’re stressed, when you have brain fog, when you have few productive hours in the day, when your emotions get all messed up.
There is a kind of “lawyery” thing you have to do, where you simulate the objections people will raise about things you should have done or should try, and preempt all of that by trying it, so you can say “see? I already tried it” and they don’t have easy outs.
How to deal with the health bureaucracy (US-specific, but what I know is even more specific): how to get the benefits you need from health providers, how to deal with insurance, how to get referrals, how to push providers with questions, optimizing which health insurance to have.
How to do health research: how to find information about symptoms, how to organize your research, how to ask good questions when meeting doctors, the importance of talking to a lot of people.
Specific things I’ve learned about different drugs, nootropics, health devices, practices, etc., and which ones seem the most promising.
General life outlook stuff:
How to orient toward “this being your new life”
How to stay motivated to live life and accomplish things while chronically ill; the hardcoreness of being ill for so long and what this does to your personality.
How to maintain a “health tracker”: how to keep track of your symptoms, what you did each day, what you ate, how you slept, etc. for future reference, and whether or not tracking any of this is useful.
Daily goal-setting: how to get shit done even if you feel like shit every day.
The importance of having a “health buddy” with similar health problems whom you can talk to all the time, as having a chronic health problem can be very isolating (very few people can understand or support you in the way you need).
The importance of just trying lots of things to see what helps, and what this looks like in practice.
Basic health stuff that seems good to do regardless of what the cause of your symptoms is: nutrition, exercise, sleep, wackier stuff.
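As a concrete illustration of the “health tracker” idea above, here is a minimal sketch of a plain-text daily log in Python. The file name, field names, and example entries are all just illustrative assumptions, not a recommendation of any particular tracking scheme:

```python
import csv
import tempfile
from pathlib import Path

# Hypothetical set of things to track each day; adjust to your own symptoms.
FIELDS = ["date", "symptoms", "sleep_hours", "food", "notes"]

def log_day(path: Path, **entry: str) -> None:
    """Append one day's entry, writing a header row if the file is new."""
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        # Missing fields are left blank so every row has the same columns.
        writer.writerow({field: entry.get(field, "") for field in FIELDS})

def read_log(path: Path) -> list[dict]:
    """Load all entries as a list of dicts for later review."""
    with path.open(newline="") as f:
        return list(csv.DictReader(f))

# Example usage with made-up dates and symptoms, in a temporary directory.
log_path = Path(tempfile.mkdtemp()) / "health_log.csv"
log_day(log_path, date="2023-01-01", symptoms="headache; fatigue", sleep_hours="7.5")
log_day(log_path, date="2023-01-02", symptoms="fatigue", sleep_hours="8",
        notes="tried magnesium")
entries = read_log(log_path)
```

A plain CSV like this is easy to grep, open in a spreadsheet, or feed into later analysis, which is part of why I lean toward simple formats for this kind of long-running record.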
Seems like you were right, and the Peter in question is Peter Eckersley. I just saw in this post:
The Alignment Problem is dedicated to him, after he convinced his friend Brian Christian of it.
That post did not link to a source, but I found this tweet where Brian Christian says:
His influence in both my intellectual and personal life is incalculable. I dedicated The Alignment Problem to him; I knew for many years that I would.
Did you end up running it through your internal infohazard review and if so what was the result?
You have my permission!
I see, thank you for the response!
I am curious what you think of my old comment here that I made on Anna’s post (some related discussion here).
For me, the thing that distinguishes exposition from teaching is that in exposition one is supposed to produce some artifact that does all the work of explaining something, whereas in teaching one is allowed to jump in and e.g. answer questions or “correct course” based on student confusion. This ability to “use a knowledgeable human” in the course of explanation makes teaching a significantly easier problem (though still a very interesting one!). It also means though that scaling teaching would require scaling the creation of knowledgeable people, which is the very thing we are trying to solve. Can we make use of just one knowledgeable human, and somehow produce an artifact that can scalably “copy” this knowledge to other humans? -- that’s the exposition problem. (This framing is basically Bloom’s 2 sigma problem.)
That’s very exciting to me! I personally study how science has worked and failed historically, and epistemic progress and vigilance in general, in order to make alignment go faster and better, so I’ll be interested to discuss exposition as a science with you (and maybe give feedback on your follow-up posts if you want. ;) )
Cool! I just shared my draft post with you that goes into detail about the “exposition as science” strategy (ETA for everyone else: the post has now been published); if that post seems interesting to you, I’d be happy to discuss more with you (or you can just leave comments on the post if that is easier).
Doesn’t do what? I understand Eliezer to be saying that he figured out AI risk via thinking things through himself (e.g., writing a story that involved outcome pumps; reflecting on orthogonality and instrumental convergence; etc.), rather than being argued into it by someone else who was worried about AI risk. If Eliezer didn’t do that, there would still presumably be someone prior to him who did that, since conclusions and ideas have to enter the world somehow. So I’m not understanding what you’re modeling as ridiculous.
My understanding of the history is that Eliezer did not realize the importance of alignment at first, and that he only did so later after arguing about it online with people like Nick Bostrom. See e.g. this thread. I don’t know enough of the history here, but it also seems logically possible that Bostrom could have, say, only realized the importance of alignment after conversing with other people who also didn’t realize the importance of alignment. In that case, there might be a “bubble” of humans who together satisfy the null string criterion, but no single human who does.
The null string criterion does seem a bit silly nowadays, since I think the people who would have satisfied it would sooner have read about AI risk on e.g. LessWrong. So they wouldn’t even have the chance to live to age ~21 to see if they would spontaneously invent the ideas.
With help from David Manheim, this post has now been turned into a paper. Thanks to everyone who commented on the post!
Would you say you are traumatized/did unschooling traumatize you/did attending the public high school and college traumatize you?