My hypothesis for why the psychosis thing happens is that it has to do with drastic modification of self-image.
I’m interested in hearing more about the causes of this hypothesis. My own guess is that sudden changes to the self-image cause psychosis more than other sudden psychological change, but that all rapid psychological change will tend to cause it to some extent. I also share the prediction (or maybe for you it was an observation) that you wrote in our original thread: “It seems to be a lot worse if this modification was pushed on them to any degree.”
The reasons for my own prediction are:
1) My working model of psychosis is “lack of a stable/intact ego”, where my working model of an “ego” is “the thing you can use to predict your own actions so as to make successful multi-step plans, such as ‘I will buy pasta, so that I can make it on Thursday for our guests.’”
2) Self-image seems quite related to this sort of ego.
3) Nonetheless, recreational drugs of all sorts, such as alcohol, seem to sometimes cause psychosis (not just psychedelics), so … I guess I tend to think that any old psychological change sometimes triggers psychosis.
3b) Also, if it’s true that reading philosophy books sometimes triggers psychosis (as I mentioned my friend’s psychiatrist saying, in the original thread), that seems to me probably better modeled by “change in how one parses the world” rather than by “change in self-image”? (not sure)
4) Relatedly, maybe: people say psychosis was at unusually low levels in England in WW2, perhaps because of the shared society-level meaning (“we are at war, we are on a team together, your work matters”). And you say your Mormon ward as a kid didn’t have much psychosis. I tend to think (but haven’t checked, and am not sure) that places with unusually coherent social fabric, and people who have strong ecology around them and have had a chance to build up their self-image slowly and in deep dialog with everything around them, would have relatively low psychosis, and that rapid psychological change of any sort (not only to the self-image) would tend to mess with this.
Epistemic status of all this: hobbyist speculation, nobody bet your mental health on it please. (Cf. https://x.com/jessi_cata/status/1113557294095060992 )
The data informing my model came from researching AI psychosis cases, and specifically one in which the AI gradually guided a user into modifying his self-image (disguised as self-discovery), explicitly instilling magical thinking into him (which appears to have worked). I have a long post about this case in the works, similar to my Parasitic AI post.
After I had the hypothesis, it “clicked” that it also explained past community incidents. I doubt I’m any more clued-in to rationalist gossip than you are. If you tell me that the incidence has gone down in recent years, I think I will believe you.
Upon hearing your model, I feel tempted to patch mine to be about discrepancies between self-image and self. I think it’s a good sign that yours is pretty similar! I don’t see why you think prediction of actions is relevant, though.
Attempt at gears-level: phenomenal consciousness is the ~result of reflexive-empathy as applied to your self-image (which is of the same type as a model of your friend). So conscious perception depends on having this self-image update ~instantly to current sensations. When it changes rapidly it may fail to keep up. That explains the hallucinations. And when your model of someone changes quickly, you have instincts towards paranoia, or making hasty status updates. These still trigger when the self-image changes quickly, and then loopiness amplifies it. This explains the strong tendency towards paranoia (especially things like “voices inside my head telling me to do bad things”) or delusions of grandeur.
[this is a throwaway model, don’t take too seriously]
It seems like psychedelics are ~OOM worse than alcohol though, when thinking about base rates?
Hmm… I’m not sure that meaning is a particularly salient difference between Mormons and rationalists to me. You could say both groups strive for bringing about a world where Goodness wins and people become masters of planetary-level resources. The community/social-fabric thing seems like the main difference to me (and would apply to WW2 England).
I look forward to seeing your post. I’d also like to see some of the raw data you’re working from if it seems easy and not-bad to share it with me.
I mean, fair. But meaning in WW2 England is shared, supported, kept in many peoples’ heads so that if it goes a bit wonky in yours you can easily reload the standard version from everybody else, and it’s been debugged until it recommends fairly sane stable socially-accepted courses of action? And meaning around the rationalists is individual and variable.
The reason I expect things to be worse if the modification is pushed on a person to any degree, is because I figure our brains/minds often know what they’re doing, and have some sort of “healthy” process for changing that doesn’t usually involve a psychotic episode. It seems more likely to me that our brains/minds will get updated in a way-that-causes-trouble if some outside force is pressuring or otherwise messing with them.
I don’t know how this plays out specifically in psychosis, but ascribing intentionality in general, and specifically ascribing adversariality, seems like an especially important dimension / phenomenon. (Cf. https://en.wikipedia.org/wiki/Ideas_and_delusions_of_reference )
Ascribing adversariality in particular might be especially prone to setting off a self-sustaining reaction.
Consider first that when you ascribe adversariality, things can get weird fast. Examples:
If Bob thinks Alice is secretly hostile towards Bob, trust breaks down. Propositional statements from Alice are interpreted as false, lies, or subtler manipulations with hidden intended effects.
This generally winds Bob up. Every little thing Alice says or does, if you take as given the (probably irrational) assumption of adversariality, would rationally give Bob good reason to spin up a bunch of computation looking for possible plans Alice might be pursuing. This is first of all just really taxing for Bob, and distracting from more normal considerations. And second of all it’s a local bias, pointing Bob to think about negative outcomes; normally that’s fine, all attention-direction is a local bias, but since the situation (e.g. talking to Alice) is ongoing, Bob may not have time and resources to compute everything out so that he also thinks of: well, maybe Alice’s behavior is just normal, or how can I test this sanely, or alternative hypotheses other than hostility from Alice, etc.
This cuts off flow of information from Alice to Bob.
This cuts off positive sum interactions between Alice and Bob; Bob second guesses every proposed truce, viewing it as a potential false peace.
Bob might start reversing the pushes that Alice is making, which could be rational on the supposition that Alice is being adversarial. But if Alice’s push wasn’t adversarial and you reverse it, then it might be self-harming. E.g. “She’s only telling me to try to get some sleep because she knows I’m on the verge of figuring out XYZ, I better definitely not sleep right now and keep working towards XYZ”.
Are they all good, or all out to get me? If Bob thinks Alice is adversarial, and Alice is in fact not adversarial, and Carmi and Danit are also not adversarial, then Carmi and Danit look like Alice, and so Bob might think they are adversarial too.
And suppose, just suppose, that one person does do something kinda adversarial. Like suggest that maybe you really need to take some sort of stronger calming drug, or even see a doctor. Well, maybe that’s just one little adversariality—or maybe this is a crack in the veneer, the conspiracy showing through. Maybe everyone has been trying really hard to merely appear non-adversarial; in that case, the single crack is actually a huge piece of evidence. (Cf. https://sideways-view.com/2016/11/14/integrity-for-consequentialists/ ; https://en.wikipedia.org/wiki/Splitting_(psychology))
The derivative, or the local forces, become exaggerated in importance. If Bob perceives a small adversarial push from Alice, he feels under attack in general. He computes out: There is this push, and there will be the next and the next and the next; in aggregate this leads somewhere I really don’t want; so I must push back hard, now. So Bob is acting crazy, seemingly having large or grandiose responses to small things. (Cf. https://en.wikipedia.org/wiki/Splitting_(psychology) )
Methods of recourse are broken; Bob has no expectation of being able to JOOTS (jump out of the system) and be caught by the social fabric / by justice / by conversation and cooperative reflection. (I don’t remember where, maybe in some text about double binds, but there was something about: Someone is in psychosis, and when interviewed, they immediately give strange, nonsensical, or indirect answers to an interviewer; but not because they couldn’t give coherent answers—rather, because they were extremely distrustful of the interviewer and didn’t want to tip off the interviewer that they might be looking to divulge some terrible secret. Or something in that genre, I’m not remembering it.)
Now, consider second that as things are getting weird, there’s more grist for the mill. There’s more weird stuff happening, e.g. Bob is pushing people around him into contexts that they lack experience in, so they become flustered, angry, avoidant, blissfully unattuned, etc. With this weird stuff happening, there’s more for Bob to read into as being adversarial.
Third, consider that the ascription of adversariality doesn’t have to be Cartesian. “Aliens / demons / etc. are transmitting / forcing thoughts into my head”. Bob starts questioning / doubting stuff inside him as being adversarial, starts fighting with himself or cutting off parts of his mind.
“change in how one parses the world” rather than by “change in self-image”
Not sure if this is helpful, but instead of contrast, I see these as two sides of the same coin. If the world is X, then I am a person living in X. But if the world is actually Y, then I am a person living in Y. Both change.
I can be a different person in the same world, but I can’t be the same person in different worlds. At least if I take ideas seriously and I want to have an impact on the world.