Thanks for this.
I’m interested in figuring out more about what’s going on here—how do you feel about emailing me, hashing out the privacy issues, and, if we can get them hashed out, you telling me the four people you’re thinking of who had psychotic episodes?
I agree I’m being somewhat inconsistent, but I’d rather do that than prematurely force consistency and end up being wrong or missing some subtlety. I’m trying to figure out what went on in these cases in more detail and will probably want to ask you a lot of questions by email if you’re open to that.
If this information isn’t too private, can you send it to me? email@example.com
I’ve posted an edit/update above after talking to Vassar.
Yes, I agree with you that all of this is very awkward.
I think the basic liberal model where everyone uses Reason a lot and we basically trust their judgments is a good first approximation and we should generally use it.
But we have to admit at least small violations of it even to get the concept of “cult”. Not just the sort of weak cults we’re discussing here, but even the really strong cults like Heaven’s Gate or Jonestown. In the liberal model, someone should be able to use Reason to conclude that being in Heaven’s Gate is bad for them, and leave. When we use the word “cult”, we’re implicitly agreeing that this doesn’t always work, and we’re bringing in creepier and less comprehensible ideas like “charisma” and “brainwashing” and “cognitive dissonance”.
(and the same thing with the concept of “emotionally abusive relationship”)
I don’t want to call the Vassarites a cult because I’m sure someone will confront me with a Cult Checklist that they don’t meet, but I think that it’s not too crazy to argue that some of these same creepy ideas like charisma and so on were at work there. And everyone knows cults can get weird and end in mental illness. I agree it’s weird that you can get that far without robes and chanting or anything, but I never claimed to really understand exactly how cults work, plus I’m sure the drugs helped.
I think believing cults are possible is different in degree if not in kind from Leverage “doing seances...to call on demonic energies and use their power to affect the practitioners’ social standing”. I’m claiming, though I can’t prove it, that what I’m saying is more towards the “believing cults are possible” side.
I’m actually very worried about this! I hate admitting cults are possible! If you admit cults are possible, you have to acknowledge that the basic liberal model has gaps, and then you get things like: if an evangelical deconverts to atheism, the other evangelicals can say “Oh, he’s in a cult, we need to kidnap and deprogram him since his best self wouldn’t agree with the deconversion.” I want to be extremely careful about when we do things like that, which is why I’m not actually “calling for isolating Michael Vassar from his friends”. I think in the Outside View we should almost never do this!
But you were the one to mention this cluster of psychotic breaks, and I am trying to provide what I think is a more accurate perspective on them. Maybe in the future we learn that this was because of some weird neuroinflammatory virus that got passed around at a Vassarite meeting and we laugh that we were ever dumb enough to think a person/group could transmit psychotic breaks. But until then, I think the data point that all of this was associated with Vassar and the Vassarites is one we shouldn’t just ignore.
I’m having trouble figuring out how to respond to this hostile framing. I mean, it’s true that I’ve talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and “the community” have failed to live up to their stated purposes. Separately, it’s also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)
Michael is a charismatic guy who has strong views and argues forcefully for them. That’s not the same thing as having mysterious mind powers to “make people paranoid” or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I’m sure he’d be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument.
I more or less Outside View agree with you on this, which is why I don’t go around making call-out threads or demanding people ban Michael from the community or anything like that (I’m only talking about it now because I feel like it’s fair for the community to try to defend itself after Jessica attributed all of this to the wider community instead of Vassar specifically). “This guy makes people psychotic by talking to them” is a silly accusation to go around making, and I hate that I have to do it!
But also, I do kind of notice the skulls and they are really consistent, and I would feel bad if my desire not to say this ridiculous thing resulted in more people getting hurt.
I think the minimum viable narrative here is, as you say, something like “Michael is very good at spotting people right on the verge of psychosis, and then he suggests they take drugs.” Maybe a slightly more complicated narrative involves bringing them to a state of total epistemic doubt where they can’t trust any institutions or any of the people they formerly thought were their friends, although now this is getting back into the “he’s just having normal truth-seeking conversation” objection. He also seems really good at pushing trans people’s buttons in terms of their underlying anxiety around gender dysphoria (see the Ziz post), so maybe that contributes somehow. I don’t know how it happens; I’m sufficiently embarrassed to be upset about something which looks like “having a nice interesting conversation” from the outside, and I don’t want to violate liberal norms that say you’re allowed to have conversations—but I think those norms also make it okay to point out the very high rate at which those conversations end in mental breakdowns.
Maybe one analogy would be people with serial emotionally abusive relationships—should we be okay with people dating Brent? Like yes, he had a string of horrible relationships that left the other person feeling violated and abused and self-hating and trapped. On the other hand, most of this, from the outside, looked like talking. He explained why it would be hurtful for the other person to leave the relationship or not do what he wanted, and he was convincing and forceful enough about it that it worked (I understand he also sometimes used violence, but I think the narrative still makes sense without it). Even so, the community tried to make sure people knew that if they started a relationship with him they would get hurt, and eventually got really insistent about that. I do feel like this was a sort of boundary-crossing of important liberal norms, but I think you’ve got to at least leave that possibility open for when things get really weird.
I don’t want to reveal any more specific private information than this without your consent, but let it be registered that I disagree with your assessment that your joining the Vassarites wasn’t harmful to you. I was not around for the 2017 issues (though if you reread our email exchanges from April you will understand why I’m suspicious), but when you had some more minor issues in 2019 I was more in the loop, and I ended up emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them of making your situation worse and asking them to maybe lay off you until you were feeling slightly better, and obviously they just responded with their “it’s correct to be freaking out about learning your entire society is corrupt and gaslighting” shtick.
It was on the Register of Bans, which unfortunately went down after I deleted the blog. I admit I didn’t publicize it very well because this was a kind of sensitive situation and I was trying to do it without destroying his reputation.
Thanks. If you meant that when someone is at a very early stage of thinking strange things, you should talk to them about it and try to come to a mutual agreement on how worrying this is and what the criteria would be for psych treatment, instead of immediately dehumanizing them and demanding the treatment right away, then I 100% agree.
I don’t remember the exact words in our last conversation. If I said that, I was wrong and I apologize.
My position is that in schizophrenia (which is a specific condition and not just the same thing as psychosis), lifetime antipsychotics might be appropriate. Eg this paper suggests continuing for twelve months after a first schizophrenic episode and then stopping and seeing how things go, which seems reasonable to me. It also says that if every time you take someone off antipsychotics they become fully and dangerously psychotic again, then lifetime antipsychotics are probably their best bet. In a case like that, I would want the patient’s buy-in, ie if they were medicated after a psychotic episode I would advise them of the reasons why continued antipsychotic use was recommended in their case; if they said they didn’t want it we would explore why, given the very high risk level; and if they still said they didn’t want it then I would follow their direction.
I didn’t get a chance to talk to you during your episode, so I don’t know exactly what was going on. I do think that psychosis should be thought of differently than just “weird thoughts that might be true”, as more of a whole-body nerve-and-brain dysregulation of which weird thoughts are just one symptom. I think in mild psychosis it’s possible to snap someone back to reality where they agree their weird thoughts aren’t true, but in severe psychosis it isn’t (I remember when I was a student I tried so hard to convince someone that they weren’t royalty, hours of passionate debate, and it just did nothing). I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway—you’re treating a symptom. Analogy to eg someone having chest pain from a heart attack, and you give them painkillers for the pain but don’t treat the heart attack.
(Although there’s a separate point where it would be wrong and objectifying to falsely claim someone who’s just thinking differently is psychotic or pre-psychotic; given that you did end up psychotic, it doesn’t sound like the people involved were making that mistake.)
My impression is that some medium percent of psychotic episodes end in permanent reduced functioning, and some other medium percent end in suicide or jail or some other really negative consequence, and this is scary enough that treating it is always an emergency, and just treating the symptom but leaving the underlying condition is really risky.
I agree many psychiatrists are terrible and that wanting to avoid them is a really sympathetic desire, but when it’s something really serious like psychosis I think of this as like wanting to avoid surgeons (another medical profession with more than its share of jerks!) when you need an emergency surgery.
I want to add some context I think is important to this.
Jessica was (I don’t know if she still is) part of a group centered around a person named Vassar, informally dubbed “the Vassarites”. Their philosophy is complicated, but they basically have a kind of gnostic stance where regular society is infinitely corrupt and conformist and traumatizing and you need to “jailbreak” yourself from it (I’m using a term I found in Ziz’s discussion of her conversations with Vassar; I don’t know if Vassar uses it himself). Jailbreaking involves a lot of tough conversations, breaking down of self, and (at least sometimes) lots of psychedelic drugs.
Vassar ran MIRI a very long time ago, but either quit or got fired, and has since been saying that MIRI/CFAR is also infinitely corrupt and conformist and traumatizing (I don’t think he thinks they’re worse than everyone else, but I think he thinks they had a chance to be better, they wasted it, and so it’s especially galling that they’re just as bad). Since then, he’s tried to “jailbreak” a lot of people associated with MIRI and CFAR—again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs. The combination of drugs and paranoia caused a lot of borderline psychosis, which the Vassarites mostly interpreted as success (“these people have been jailbroken out of the complacent/conformist world, and are now correctly paranoid and weird”). Occasionally it would also cause full-blown psychosis, which they would discourage people from seeking treatment for, because they thought psychiatrists were especially evil and corrupt and traumatizing and unable to understand that psychosis is just breaking mental shackles.
(I am a psychiatrist and obviously biased here)
Jessica talks about a cluster of psychoses from 2017–2019 which she blames on MIRI/CFAR. She admits that not all the people involved worked for MIRI or CFAR, but kind of equivocates around this and says they were “in the social circle” in some way. The actual connection is that most (maybe all?) of these people were involved with the Vassarites or the Zizians (the latter being IMO a Vassarite splinter group, though I think both groups would deny this characterization). The main connection to MIRI/CFAR is that the Vassarites recruited from the MIRI/CFAR social network.
I don’t have hard evidence of all these points, but I think Jessica’s text kind of obliquely confirms some of them. She writes:
“Psychosis” doesn’t have to be a bad thing, even if it usually is in our society; it can be an exploration of perceptions and possibilities not before imagined, in a supportive environment that helps the subject to navigate reality in a new way; some of R.D. Laing’s work is relevant here, describing psychotic mental states as a result of ontological insecurity following from an internal division of the self at a previous time. Despite the witch hunts and so on, the Leverage environment seems more supportive than what I had access to. The people at Leverage I talk to, who have had some of these unusual experiences, often have a highly exploratory attitude to the subtle mental realm, having gained access to a new cognitive domain through the experience, even if it was traumatizing.
R.D. Laing was a 1960s pseudoscientist who claimed that schizophrenia is how “the light [begins] to break through the cracks in our all-too-closed minds”. He opposed schizophrenics taking medication, and advocated treatments like “rebirthing therapy” where people role-play fetuses going through the birth canal—for which he was stripped of his medical license. The Vassarites like him, because he is on their side in the whole “actually psychosis is just people being enlightened as to the true nature of society” thing. I think Laing was wrong, that psychosis is actually bad, and that the “actually psychosis is good sometimes” mindset is extremely related to the Vassarites causing all of these cases of psychosis.
Unless there were psychiatric institutionalizations or jail time resulting from the Leverage psychosis, I infer that Leverage overall handled their metaphysical weirdness better than the MIRI/CFAR adjacent community. While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible. (I noted at the time that there might be a sense in which different people have “auras” in a way that is not less inherently rigorous than the way in which different people have “charisma”, and I feared this type of comment would cause people to say I was crazy.) As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.
Jessica is accusing MIRI of being insufficiently supportive to her by not taking her talk about demons and auras seriously when she was borderline psychotic, and comparing this to Leverage, who she thinks did a better job by promoting an environment where people accepted these ideas. I think MIRI was correct to be concerned and (reading between the lines) telling her to seek normal medical treatment, instead of telling her that demons were real and she was right to worry about them, and I think her disagreement with this is coming from a belief that psychosis is potentially a form of useful creative learning. While I don’t want to assert that I am 100% sure this can never be true, I think it’s true rarely enough, and with enough downside risk, that treating it as a psychiatric emergency is warranted.
On the two cases of suicide, Jessica writes:
Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz. (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don’t think this judgment was in error, though many of the following actions were; the splinter group seems to have selected for high scrupulosity and not attenuated its mental impact.)
Ziz tried to create an anti-CFAR/MIRI splinter group whose members had mental breakdowns. Jessica also tried to create an anti-CFAR/MIRI splinter group and had a mental breakdown. This isn’t a coincidence—Vassar tried his jailbreaking thing on both of them, and it tended to reliably produce people who started crusades against MIRI/CFAR, and who had mental breakdowns. Here’s an excerpt from Ziz’s blog on her experience (edited heavily for length, and slightly to protect the innocent):
When I first met Vassar, it was a random encounter in an experimental group call organized by some small-brand rationalist. He talked for about an hour, and automatically became the center of conversation; I typed notes as fast as I could, thinking, “if this stuff is true it changes everything; it’s the [crux] of my life.” (It was true, but I did not realize it immediately.) Randomly, another person found the link, came in and said, “hi”. [Vassar] said “hi”, she said “hi” again, apparently for humor. [Vassar] said something terse I forget, “well if this is what …”, apparently giving up on the venue, and disconnected without further comment. One by one, the other ~10 people besides her, including me, disconnected disappointedly, wordlessly or just about, right after. A wizard was gracing us with his wisdom and she fucked it up. And in my probably-representative case that was just about the only way I could communicate how frustrated I was at her for that.
[Vassar explained how] across society, the forces of gaslighting were attacking people’s basic ability to think and to use justice as a Schelling point, until only the built-in Schelling points of gender and race remained. Vassar listed fronts in the war on gaslighting, disputes in the community, and included [local community member ZD] [...] ZD said Vassar broke them out of a mental hospital. I didn’t ask them how. But I considered that both badass and heroic. From what I hear, ZD was, probably as with most, imprisoned for no good reason, in some despicable act of “get that unsightly person not playing along with the [heavily DRM’d] game we’ve called sanity out of my free world”.
I heard [local community member AM] was Vassar’s former “apprentice”. And I had started picking up jailbroken wisdom from them secondhand without knowing where it was from. But Vassar did it better. After Rationalist Fleet, I concluded I was probably worth Vassar’s time to talk to a bit, and I emailed him, carefully briefly stating my qualifications, in terms of ability to take ideas seriously and learn from him, so that he could get maximally dense VOI on whether to talk to me. A long conversation ensued. And I got a lot from it. [...]
Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn’t detransition. Olivia became an agent of mental breakdown, compulsively breaking others via drug trips because they went through gates they shouldn’t’ve. And didn’t detransition. This all created an awful tension in me. The rationality community was kind of compromised as a rallying point for truthseeking. This was desperately bad for the world. [Vassar] was at the center of, largely the creator of a “no actually for real” rallying point for the jailbroken reality-not-social-reality version of this.
Ziz is describing the same cluster of psychoses Jessica is (including Jessica’s own), but I think doing so more accurately, by describing how it was a Vassar-related phenomenon. I would add Ziz herself to the list of trans women who got negative mental effects from Vassar, although I think (not sure) Ziz would not endorse my description of her as having these.
What was the community’s response to this? I have heard rumors that Vassar was fired from MIRI a long time ago for doing some very early version of this, although I don’t know if it’s true. He was banned from REACH (and implicitly rationalist social events) for somewhat unrelated reasons. I banned him from SSC meetups for a combination of reasons including these. For reasons I don’t fully understand and which might or might not be related to this, he left the Bay Area. This was around the time COVID happened, so everything’s kind of been frozen in place since then.
I want to clarify that I don’t dislike Vassar, he’s actually been extremely nice to me, I continue to be in cordial and productive communication with him, and his overall influence on my life personally has been positive. He’s also been surprisingly gracious about the fact that I go around accusing him of causing a bunch of cases of psychosis. I don’t think he does the psychosis thing on purpose, I think he is honest in his belief that the world is corrupt and traumatizing (which, at the margin, shades into versions of “the world is corrupt and traumatizing” that everyone agrees are true), and I believe he is honest in his belief that he needs to figure out ways to help people do better. There are many smart people who work with him and support him who have not gone psychotic at all. I don’t think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people. My main advice is that if he or someone related to him asks you if you want to take a bunch of drugs and hear his pitch for why the world is corrupt, you say no.
EDIT/UPDATE: I got a chance to talk to Vassar, who disagrees with my assessment above. We’re still trying to figure out the details, but so far, we agree that there was a cluster of related psychoses around 2017, all of which were in the same broad part of the rationalist social graph. Features of that part were—it contained a lot of trans women, a lot of math-y people, and some people who had been influenced by Vassar, although Vassar himself may not have been a central member. We are still trying to trace the exact chain of who had problems first and how those problems spread. I still suspect that Vassar unwittingly came up with some ideas that other people then spread through the graph. Vassar still denies this and is going to try to explain a more complete story to me when I have more time.
I’ve tried to address your point about psychiatry in particular at https://slatestarcodex.com/2019/12/04/symptom-condition-cause/
For the whale point, am I fairly interpreting your argument as saying that mammals are more similar, and more fundamentally similar, to each other, than swimmy-things? If so, consider a thought experiment. Swimmy-things are like each other because of convergent evolution. Presumably millions of years ago, the day after the separation of the whale and land-mammal lineages, proto-whales and proto-landmammals were extremely similar, and proto-whales and proto-fish were extremely dissimilar. Let’s say in 99% of ways, whales were more like landmammals, and in 1% of ways, they were more like fish. Some convergent evolution takes place, we get to the present, and you’re claiming that modern whales are still more like landmammals than fish—I have no interest in disputing that claim, let’s say they’re more like landmammals in 85% of ways, and fish in 15% of ways. Now fast-forward into the future, after a billion more years of convergent evolution, and imagine that whales have evolved to their new niche so well that they are more like fish in 99% of ways, and more like mammals in only 1% of ways. Are you still going to insist that blood is thicker than water and we need to judge them by their phylogenetic group, even though this gives almost no useful information and it’s almost always better to judge them by their environmental affinities?
(I don’t think this is an absurd hypothetical—I think “crabs” are in this situation right now)
And if not, at some point in the future, do they go from being obviously-mammals-you-are-not-allowed-to-argue-this to obviously-fish-you-are-not-allowed-to-argue-this in the space of a single day? Or might there be a very long period when they are more like mammals in some way, more like fish in others, and you’re allowed to categorize them however you want based on which is more useful for you? If the latter, what makes you think we’re not in that period right now?
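To make the drift concrete, here’s a toy sketch of that trajectory (all numbers invented; “percent of ways” is obviously not a real measurable quantity): because the similarity changes smoothly, any hard mammal/fish boundary is a threshold we pick, not a fact about the whale.

```python
# Toy sketch of the thought experiment (all numbers invented): the fraction of
# "ways" in which whales resemble land mammals drifts smoothly under convergent
# evolution, so any "became a fish on this day" boundary is a chosen threshold.
def mammal_similarity(t, start=0.99, end=0.01):
    """Fraction of ways whales resemble land mammals at time t in [0, 1]."""
    return start + (end - start) * t

for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    s = mammal_similarity(t)
    label = "mammal" if s > 0.5 else "fish"  # arbitrary 50% threshold
    print(f"t={t:.2f}: {s:.0%} mammal-like -> call it a {label}")
```

Nothing in the smooth curve privileges any particular day for the switch; only the threshold does, and the threshold is ours.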
This rubs me wrong for the same reason that “no evidence for...” claims rub me wrong.
We have a probably-correct model, the hygiene hypothesis broadly understood. We have a plausible corollary of that model, which is that kids eating dirt helps their immune system (I had never heard this particular claim before, but since you mention it, it seems like a plausible corollary). We should have a low-but-not-ridiculously-low prior on this.
(probably some people would say a high prior, since it follows naturally from a probably-true thing, but I don’t trust any multi-step chain of reasoning in medicine)
When I read the title, I thought “Oh! I guess someone showed the specific behavior of eating dirt doesn’t help, so I should update against the hygiene hypothesis!” But the post presents no evidence this is wrong. It’s just saying there are no studies of it.
This seems kind of like framing the proverbial parachute point as “‘Parachutes prevent falling injuries’ Is Basically Made Up”. It’s not made up! It was assigned a high prior based on other things we know! Nobody has given us any evidence for or against that prior, so we should stick to it.
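To spell out the Bayesian version of this (a minimal sketch; the 20% prior is a number I made up for illustration): “no studies exist” has a likelihood ratio of 1, so the posterior just equals the prior.

```python
# Minimal Bayes-in-odds-form sketch (the 20% prior is invented for illustration).
def update(prior, likelihood_ratio):
    """Return the posterior probability after one Bayesian update."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.20                # low-but-not-ridiculously-low prior
print(update(prior, 1.0))   # 0.2 -- "no studies" is LR = 1, so the prior stands
print(update(prior, 0.25))  # ~0.059 -- an actual negative study would move us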
Can you explain the no-loss competition idea further?
If you have to stake your USDC, isn’t this still locking up USDC, the thing you were trying to avoid doing?
What gives the game tokens value?
Thanks, I read that, and while I wouldn’t say I’m completely enlightened, I feel like I have a good basis for reading it a few more times until it sinks in.
I interpret you as saying in this post: there is no fundamental difference between base and noble motivations, they’re just two different kinds of plans we can come up with and evaluate, and we resolve conflicts between them by trying to find frames in which one or the other seems better. Noble motivations seem to “require more willpower” only because we often spend more time working on coming up with positive frames for them, because this activity flatters our ego and so is inherently rewarding.
I’m still not sure I agree with this. My own base motivation here is that I posted a somewhat different model of willpower at https://astralcodexten.substack.com/p/towards-a-bayesian-theory-of-willpower , which is similar to yours except that it does keep a role for the difference between “base” and “noble” urges. I’m trying to figure out if I still want to defend it against this one, but my thoughts are something like:
- It feels like on stimulants, I have more “willpower”: it’s easy to take the “noble” choice when it might otherwise be hard. Likewise, when I’m drunk I have less ability to override base motivations with noble ones, and (although I guess I can’t prove it) this doesn’t seem like a purely cognitive effect where it’s harder for me to “remember” the important benefits of my noble motivations. The same is true of various low-energy states, eg tired, sick, stressed—I’m less likely to choose the noble motivation in all of them. This suggests to me that baser and nobler motivations are coming from different places, and stimulants strengthen (in your model) the connection between the noble-motivation-place and the striatum relative to the connection between the base-motivation-place and the striatum, and alcohol/stress/etc weaken it.
- I’m skeptical of your explanation for the “asymmetry” of noble vs. base thoughts. Are thoughts about why I should stay home really less rewarding than thoughts about why I should go to the gym? I’m imagining the opposite—I imagine staying home in my nice warm bed, and this is a very pleasant thought, and accords with what I currently really want (to not go to the gym). On the other hand, thoughts about why I should go to the gym, if I were to verbalize them, would sound like “Ugh, I guess I have to consider the fact that I’ll be a fat slob if I don’t go, even though I wish I could just never have to think about that”.
- Base thoughts seem like literally animalistic desires—hunger seems basically built on top of the same kind of hunger a lizard or nematode feels. We know there are a bunch of brain areas in the hypothalamus etc that control hunger. So why shouldn’t this be ontologically different from nobler motivations that are different from lizards’? It seems perfectly sensible that eg stimulants strengthen something about the neocortex relative to whatever part of the hypothalamus is involved in hunger. I guess I’m realizing now how little I understand about hunger—surely the plan to eat must originate in the cortex like every other plan, but it sure feels like it’s tied into the hypothalamus in some really important way. I guess maybe hunger could have a plan-generator exactly like every other, which is modulated by hypothalamic connections? It still seems like “plans that need outside justification” vs. “plans that the hypothalamus will just keep active even if they’re stupid” is a potentially important dichotomy.
- Base motivations also seem like things which have a more concrete connection to reinforcement learning. There’s a really short reinforcement loop between “want to eat candy” and “wow, that was reinforcing”, and a really long (sometimes nonexistent) loop between going to the gym and anything good happening (see the toy discounting sketch after this list). Again, this makes me suspicious that the base motivations are “encoded” in some way that’s different from the nobler motivations and which explains why different substances can preferentially reinforce one relative to the other.
- The reasons for thinking of base motivations as more like priors, discussed in that post.
- Kind of a dumb objection, but this feels analogous to other problems where a conscious/intellectual knowledge fails to percolate to emotional centers of the brain, for example someone who knows planes are very safe but is scared of flying anyway. I’m not sure how to use your theory here to account for this situation, whereas if I had a theory that explained the plane phobia problem I feel like it would have to involve a concept of lower-level vs. higher-level systems that would be easy to plug into this problem.
- Another dumb anecdotal objection, but this isn’t how I consciously experience weakness of will. The example that comes to mind most easily is wanting to scratch an itch while meditating, even though I’m supposed to stay completely still. When I imagine my thought process while worrying about this, it doesn’t feel like trying to think up new reframings of the plan. It feels like some sensory region of the brain saying “HEY! ITCH! YOU SHOULD SCRATCH IT!” and my conscious brain trying to exert some effort to overcome that. The effort doesn’t feel like thinking of new framings, and the need for the effort persists long after every plausible new framing has been thought of. And it does seem relevant that “scratch itch” has no logical justification (it’s just a basic animal urge that would persist even if someone told you there was no biological cause of the itch and no way that not scratching it could hurt you), whereas wanting to meditate well has a long chain of logical explanations.
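To illustrate the loop-length bullet above (a toy temporal-discounting sketch; the rewards, delays, and discount factor are all invented numbers, not claims about real neural quantities):

```python
# Toy temporal-discounting sketch (all numbers invented): a small immediate
# reward can dominate a much larger but heavily delayed one.
GAMMA = 0.99  # per-step discount factor

def discounted_value(reward, delay_steps, gamma=GAMMA):
    """Present value of a reward arriving delay_steps in the future."""
    return reward * gamma ** delay_steps

candy = discounted_value(reward=1.0, delay_steps=1)     # short loop: moments later
gym = discounted_value(reward=50.0, delay_steps=1000)   # long loop: months later

print(f"candy: {candy:.3f}, gym: {gym:.3f}")
# candy: 0.990, gym: 0.002 -- the gym's much larger payoff contributes almost
# nothing to the immediate reinforcement signal, so a simple reward-driven
# learner "encodes" the candy plan far more strongly.
```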
Can you link to an explanation of why you’re thinking of the brainstem as plan-evaluator? I always thought it was the basal ganglia.
Mental hospitals of the type I worked at when writing that post only keep patients for a few days, maybe a few weeks at tops. This means there’s no long-term constituency for fighting them, and the cost of errors is (comparatively) low.
The procedures for these hospitals would be hard to change. It’s hard to have a law like “you need a judge to approve sending someone to a mental hospital”, because maybe someone’s trying to kill themselves right now and the soonest a judge has an opening is three days from now. So the standard rule is “use your own judgment and a judge will review it in a week or two”, but most psychiatric cases resolve before then and never have to see a judge. In theory patients can sue doctors if they think they were being held improperly, but they almost never get around to doing this and when they do they almost never win, for a combination of “they’re usually wrong about the law and sometimes obviously insane” and “judges are biased towards doctors because they seem to know what they’re talking about”. Also, the law just got done instituting extremely severe and unpredictable punishments to any doctor who doesn’t commit someone to a mental hospital and then that person does anything bad ever, and the law has kindly decided not to be extremely severe on both sides.
There are other mental hospitals that keep people for months or years, but these do have very strict requirements for getting someone into them and are much more careful.
I have some patients on disulfiram and it works very well when they take it. The problem is definitely that they can choose not to take it if they want alcohol (or sometimes just forget for normal reasons, then opportunistically drink after they realize they’ve forgotten).
The implants are a great idea. As far as I know, the reason they’re not used is that someone would have to pay for lots and lots of studies and the economics don’t work out. Also because there are vague concerns about safety (if something went catastrophically wrong and the entire implant got released at once and then the patient drank, it would be potentially fatal) and ethics (should a realistically-probably-heavily-pressured patient be allowed to make decisions that bind their future selves?). I think this is dumb and we should just do the implant, but I don’t think it’s mysterious why we don’t, or why (in the absence of the implant) disulfiram doesn’t solve everything.