Psychotic “delusions” are more about holding certain genres of idea with a socially inappropriate amount of intensity and obsession than about holding a false idea. Lots of non-psychotic people hold false beliefs (e.g. religious people). And, interestingly, it is absolutely possible to hold a true belief in a psychotic way.
I have observed people during psychotic episodes get obsessed with the idea that social media was sending them personalized messages (quite true; targeted ads are real) or the idea that the nurses on the psych ward were lying to them (they were).
Preoccupations with the revelation of secret knowledge, with one’s own importance, with mistrust of others’ motives, and with influencing others’ thoughts or being influenced by others’ thoughts are classic psychotic themes.
And it can be a symptom of schizophrenia when someone’s mind gets disproportionately drawn to those themes. This is called being “paranoid” or “grandiose.”
But sometimes (and I suspect more often with more intelligent/self-aware people) the literal content of their paranoid or grandiose beliefs is true!
sometimes the truth really has been hidden!
sometimes people really are lying to you or trying to manipulate you!
sometimes you really are, in some ways, important! sometimes influential people really are paying attention to you!
of course people influence each other’s thoughts—not through telepathy but through communication!
a false psychotic-flavored thought is “they put a chip in my brain that controls my thoughts.” a true psychotic-flavored thought is “Hollywood moviemakers are trying to promote progressive values in the public by implanting messages in their movies.”
These thoughts can come from the same emotional drive, they are drawn from dwelling on the same theme of “anxiety that one’s own thoughts are externally influenced”, they are in a deep sense mere arbitrary verbal representations of a single mental phenomenon...
but if you take the content literally, then clearly one claim is true and one is false.
and a sufficiently smart/self-aware person will feel the “anxiety-about-mental-influence” experience, will search around for a thought that fits that vibe but is also true, and will come up with something a lot more credible than “they put a mind-control chip in my brain”—but one that is fundamentally coming from the same motive.
There’s an analogous but easier-to-recognize thing with depression.
A depressed person’s mind is unusually drawn to obsessing over bad things. But this obviously doesn’t mean that no bad things are real or that no depressive’s depressing claims are true.
When a depressive literally believes they are already dead, we call that Cotard’s Delusion, a severe form of psychotic depression. When they say “everybody hates me” we call it a mere “distorted thought”. When they talk accurately about the heat death of the universe we call it “thermodynamics.” But it’s all coming from the same emotional place.
In general, mental illnesses, and mental states generally, provide a “tropism” towards thoughts that fit with certain emotional/aesthetic vibes.
Depression makes you dwell on thoughts of futility and despair
Anxiety makes you dwell on thoughts of things that can go wrong
Mania makes you dwell on thoughts of yourself as powerful or on the extreme importance of whatever you’re currently doing
Paranoid psychosis makes you dwell on thoughts of mistrust, secrets, and influencing/being influenced
You can, to some extent, “filter” your thoughts (or the ones you publicly express) by insisting that they make sense. You still have a bias towards the emotional “vibe” you’re disposed to gravitate towards; but maybe you don’t let absurd claims through your filter even if they fit the vibe. Maybe you grudgingly admit the truth of things that don’t fit the vibe but technically seem correct.
this does not mean that the underlying “tropism” or “bias” does not exist!!!
this does not mean that you believe things “only because they are true”!
in a certain sense, you are doing the exact same thing as the more overtly irrational person, just hiding it better!
the “bottom line” in terms of vibe has already been written, so it conveys no “updates” about the world
the “bottom line” in terms of details may still be informative because you’re checking that part and it’s flexible
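A toy Bayesian version of these last two points (a minimal sketch; all the numbers are made up purely for illustration): if the vibe-level claim gets asserted regardless of its truth, the likelihood ratio is 1 and the assertion carries no information, whereas details that had to pass a truth-filter do shift the posterior.

```python
def posterior(prior, p_assert_if_true, p_assert_if_false):
    """P(claim is true | person asserts it), by Bayes' rule."""
    joint_true = prior * p_assert_if_true
    joint_false = (1 - prior) * p_assert_if_false
    return joint_true / (joint_true + joint_false)

prior = 0.3  # your prior that the claim is true

# Vibe-level claim ("things are gloomy"): the bottom line was already
# written, so it gets asserted whether or not it is true. LR = 1.
print(posterior(prior, 0.9, 0.9))  # -> 0.30, no update

# Detail-level claim: absurd versions get filtered out, so it is
# asserted far more often when true. LR > 1, so it is informative.
print(posterior(prior, 0.9, 0.2))  # -> ~0.66, a real update
```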
“He’s not wrong but he’s still crazy” is a valid reaction to someone who seems to have a mental-illness-shaped tropism to their preoccupations.
e.g. if every post he writes, on a variety of topics, is negative and gloomy, then maybe his conclusions say more about him than about the truth concerning the topic;
he might still be right about some details but you shouldn’t update too far in the direction of “maybe I should be gloomy about this too”
Conversely, “this sounds like a classic crazy-person thought, but I still separately have to check whether it’s true” is also a valid and important move to make (when the issue is important enough to you that the extra effort is worth it).
Just because someone has a mental illness doesn’t mean every word out of their mouth is false!
(and of course this assumption—that “crazy” people never tell the truth—drives a lot of psychiatric abuse.)
I once saw a video on Instagram of a psychiatrist recommending to other psychiatrists that they purchase ear scopes to check out their patients’ ears, because:
1. Apparently it is very common for folks with severe mental health issues to imagine that there is something in their ear (e.g., a bug, a listening device)
2. Doctors usually just say “you are wrong, there’s nothing in your ear” without looking
3. This destroys trust, so he started doing cursory checks with an ear scope
4. Far more often than he expected (I forget exactly, but something like 10-20%), there actually was something in the person’s ear—usually just earwax buildup, but occasionally something else, like a dead insect—that was indeed causing the sensation. This gave him a clinical pathway to addressing his patients’ discomfort that he had previously lacked.
This reminds me of dath ilan’s hallucination diagnosis from page 38 of Yudkowsky and Alicorn’s glowfic But Hurting People Is Wrong.
It’s pretty far from meeting dath ilan’s standard, though; in fact, an x-ray would be more than sufficient: anyone capable of putting something in someone’s ear would obviously vastly prefer to place it somewhere harder to check, whereas nobody could defeat an x-ray machine, since metal parts are unavoidable.
This concern pops up in books on the Cold War (employees at every org and every company suffer from mental illnesses at somewhere around their base rates, but things get complicated at intelligence agencies, where paranoid/creative/adversarial people are rewarded and even influence R&D funding), and an x-ray machine cleanly resolved the matter every time.
Tangential, but...
Schizophrenia is the archetypal definitely-biological mental disorder, but recently, for reasons relevant to the above, I’ve been wondering whether that is wrong/confused. Here’s my alternate (admittedly kinda uninformed) model:
Psychosis is a biological state or neural attractor, which we can kind of symptomatically characterize, but which really can only be understood at a reductionistic level.
One of the symptoms/consequences of psychosis is getting extreme ideas at extreme amounts of intensity.
This symptom/consequence then triggers a variety of social dynamics that give classic schizophrenia-like symptoms such as, as you say, “preoccupations with the revelation of secret knowledge, with one’s own importance, with mistrust of others’ motives, and with influencing others’ thoughts or being influenced by others’ thoughts”
That is, if you suddenly get an extreme idea (e.g. that the fly that flapped past you is a sign from god that you should abandon your current life), you would expect dynamics like:
People get concerned for you and try to dissuade you, likely even conspiring in private to do so (and even if they’re not conspiring, it can seem like a conspiracy). In response, it might seem appropriate to distrust them.
Or, if one interprets it as them just lacking the relevant information, one needs to develop some theory of why one has access to special information that they don’t.
Or, if one is sympathetic to their concern, it would be logical to worry about one’s thoughts getting influenced.
But these sorts of dynamics can totally be triggered by extreme beliefs without psychosis! This might also be related to how Enneagram type 5 (the rationalist type) is especially prone to schizophrenia-like symptoms.
(When I think “in a psychotic way”, I think of the neurological disorder, but it seems like the way you use it in your comment is more like the schizophrenia-like social dynamic?)
Also tangential, this is sort of a “general factor” model of mental states. That often seems applicable, but recently my default interpretation of factor models has been that they tend to get at intermediary variables and not root causes.
Let’s take an analogy with computer programs. If you look at the correlations in which sorts of processes run fast or slow, you might find a broad swathe of processes whose performance is highly correlated, because they are all predictably CPU-bound. However, when these processes are running slow, there will usually be some particular program that is exhausting the CPU and preventing the others from running. This problematic program can vary massively from computer to computer, so it is hard to predict or model in general, but often easy to identify in the particular case by looking at which program is most extreme.
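To make the analogy concrete, here is a minimal simulation (a sketch with made-up numbers, assuming only numpy): a single shared bottleneck produces one dominant “general factor” in the cross-process correlations, even though that factor is an intermediary variable and the specific hog program varies from machine to machine.

```python
import numpy as np

rng = np.random.default_rng(0)
n_machines, n_procs = 2000, 8

# On each machine, some background program hogs a varying share of the
# CPU; which program it is differs by machine and is hard to model in
# general, but its load acts as a single shared bottleneck.
hog_load = rng.uniform(0.0, 0.8, size=n_machines)

# Measured speeds of the CPU-bound processes: independent baselines,
# all multiplied by the same leftover CPU share on their machine.
speed = rng.normal(1.0, 0.1, (n_machines, n_procs)) * (1 - hog_load)[:, None]

# The shared bottleneck induces strong cross-process correlations, so a
# factor analysis finds one dominant "general factor"...
corr = np.corrcoef(speed, rowvar=False)
eig = np.linalg.eigvalsh(corr)
print(f"top factor explains {eig[-1] / eig.sum():.0%} of the variance")

# ...but that factor is an intermediary variable (free CPU share), not
# a root cause: the root cause is whichever particular program is the
# hog, which varies by machine and is found by looking at the extreme.
```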
One has to be a bit careful with this though. E.g. someone experiencing or having experienced harassment may have a seemingly pathological obsession on the circumstances and people involved in the situation, but it may be completely proportional to the way that it affected them—it only seems pathological to people who didn’t encounter the same issues.
I imagine they were obsessed with false versions of this idea, rather than obsessed with targeted advertising itself?
no! it sounded like “typical delusion stuff” at first until i listened carefully and yep that was a description of targeted ads.
For a while I ended up spending a lot of time thinking about specifically the versions of the idea where I couldn’t easily tell how true they were… which I suppose I do think is the correct place to be paying attention to?
Some psychiatry textbooks classify “overvalued ideas” as distinct from psychotic delusions.
Depending on how wide you make the definition, a whole rag-bag of diagnoses from the DSM-5 are overvalued ideas (e.g., anorexia nervosa overvaluing being fat).
Thank you, this is interesting and important. I worry that it overstates similarity of different points on a spectrum, though.
in a certain sense, you are doing the exact same thing as the more overtly irrational person, just hiding it better!
In a certain sense, yes. In other, critical senses, no. This is a case where quantitative differences are big enough to be qualitative. When someone is clinically delusional, there are a few things that distinguish it from the more common wrong ideas. Among them: the inability to shut up about it when it’s not relevant, and the large negative impact on relationships and daily life. For many, many purposes, “hiding it better” is the distinction that matters.
I fully agree that “He’s not wrong but he’s still crazy” is valid (though I’d usually use less-direct phrasing). It’s pretty rare that “this sounds like a classic crazy-person thought, but I still separately have to check whether it’s true” happens to me, but it’s definitely not never.
“we” can’t steer the future.
it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change, AI x-risk, and socially-conservative cultural reform.
most cultures and societies in human history have been so bad, by my present values, that I’m not sure they’re not worse than extinction, and we should expect that most possible future states are similarly bad;
history clearly teaches us that civilizations and states collapse (on timescales of centuries) and the way to bet is that ours will as well, and it’s kind of insane hubris to think that this can be prevented;
the literal species Homo sapiens is pretty resilient and might avoid extinction for a very long time, but have you MET Homo sapiens? this is cold fucking comfort! (see e.g. C. J. Cherryh’s vision in 40,000 in Gehenna for a fictional representation not far from my true beliefs — we are excellent at adaptation and survival but when we “survive” this often involves unimaginable harshness and cruelty, and changing into something that our ancestors would not have liked at all.)
identifying with species-survival instead of with the stuff we value now is popular among the thoughtful but doesn’t make any sense to me;
in general it does not make sense, to me, to compromise on personal values in order to have more power/influence. you will be able to cause stuff to happen, but who cares if it’s not the stuff you want?
similarly, it does not make sense to consciously optimize for having lots of long-term descendants. I love my children; I expect they’ll love their children; but go too many generations out and it’s straight-up fantasyland. My great-grandparents would have hated me. And that’s still a lot of shared culture and values! Do you really have that much in common with anyone from five thousand years ago?
Evolution is not your friend. God is not your friend. Everything worth loving will almost certainly perish. Did you expect it to last forever?
“I love whatever is best at surviving” or “I love whatever is strongest” means you don’t actually care what it’s like. It means you have no loyalty and no standards. It means you don’t care so much if the way things turn out is hideous, brutal, miserable, abusive… so long as it technically “is alive” or “wins”. Fuck that.
I despise sour grapes. If the thing I want isn’t available, I’m not going to pretend that what is available is what I want.
I am not going to embrace the “realistic” plan of allying with something detestable but potent. There is always an alternative, even if the only alternative is “stay true to your dreams and then get clobbered.”
I think this prompts some kind of directional update in me. My paraphrase of this is:
it’s actually pretty ridiculous to think you can steer the future
It’s also pretty ridiculous to choose to identify with what the future is likely to be.
Therefore…. Well, you don’t spell out your answer. My answer is “I should have a personal meaning-making resolution to ‘what would I do if those two things are both true,’ even if one of them turns out to be false, so that I can think clearly about whether they are true.”
I’ve done a fair amount of similar meaning-making work through the lens of Solstice 2022 and 2023. But that was more through the lens of ‘near-term extinction’ than ‘inevitability of value loss’, which does feel like a notably different thing.
So it seems worth doing some thinking and pre-grieving about that.
I of course have some answers to ‘why value loss might not be inevitable’, but it’s not something I’ve yet thought about through an unclouded lens.
I disagree a lot! Many things have gotten better! Are suffrage, abolition, democracy, property rights, etc. not significant? All the random stuff that, e.g., The Better Angels of Our Nature claims has gotten better really has gotten better.
Either things have improved in the past or they haven’t, and either people trying to “steer the future” have been influential in those improvements or they haven’t. I think things have improved, and I think there’s definitely not strong evidence that people trying to steer the future were always useless. Because trying to steer the future is very important and motivating, I try to do it.
Yes, the counterfactual impact of you individually trying to steer the future may or may not be significant, but people trying to steer the future is better than no one doing that!
How does “this is so futile” square with the massive success of taxes and criminal justice? From what I’ve heard, states have managed to reduce murder rates by 50x. Obviously that’s stopping people from doing something violent rather than non-violent, but what aspect of violence makes it relevant here? Or, e.g., how about taxes that fund the transition to renewable energy? The main argument for socially-conservative cultural reform is fertility, but what about taxes that fund kindergartens? They sort of seem to serve a similar function.
The key trick to make it correct to try to control people or stop them is to be stronger than them.
I honestly feel that the only appropriate response is something along the lines of “fuck defeatism”[1].
This comment isn’t targeted at you, but at a particular attractor in thought space.
Let me try to explain why I think rejecting this attractor is the right response rather than engaging with it.
I think it’s mostly that I don’t think that talking about things at this level of abstraction is useful. It feels much more productive to talk about specific plans. And if you have a general, high-abstraction argument that plans in general are useless, but I have a specific argument why a specific plan is useful, I know which one I’d go with :-).
Don’t get me wrong, I think that if someone struggles for a certain amount of time to try to make a difference and just hits wall after wall, then at some point they have to call it. But “never start” and “don’t even try” are completely different.
It’s also worth noting that saving the world is a team sport. It’s okay to pursue a plan that depends on a bunch of other folk stepping up and playing their part.
I would also suggest that this is the best way to respond to depression rather than “trying to argue your way out of it”.
Is it too much to declare this the manifesto of a new philosophical school, Constantinism?
wait and see if i still believe it tomorrow!
Proposal: For any given system, there’s a destiny based on what happens when it’s developed to its full extent. Sight is an example of this, where both human eyes and octopus eyes and cameras have ended up using lenses to steer light, despite being independent developments.
“I love whatever is the destiny” is, as you say, no loyalty and no standards. But, you can try to learn what the destiny is, and then on the basis of that decide whether to love or oppose it.
Plants and solar panels are the natural destiny for earthly solar energy. Do you like solarpunk? If so, good news, you can love the destiny, not because you love whatever is the destiny, but because your standards align with the destiny.
People who love solarpunk don’t obviously love computronium dyson spheres tho
That is true, though:
1) Regarding tiling the universe with computronium as destiny is Gnostic heresy.
2) I would like to learn more about the ecology of space infrastructure. Intuitively, it seems to me like the Earth is much more habitable than anywhere else, and so I would expect Sarah’s “this is so futile” point to actually be inverted when it comes to e.g. a Dyson sphere, where the stagnation-inducing worldwide regulation will by default be stronger than the entropic pressure.
More generally, I have a concept I call the “infinite world approximation”, which I think held until ~WWI. Under this approximation, your methods have to be robust against arbitrary adversaries, because they could invade from parts of the ecology you know nothing about. However, this approximation fails for Earth-scale phenomena, since Earth-scale organizations could shoot down any attempt at space colonization.
I have been having some similar thoughts on the main points here for a while, so thanks for this.
I guess to me what needs attention is when people do things along the lines of “benefit themselves and harm other people”. That harm has a pretty strict definition, though I know we can always find borderline examples. This definitely includes the abuse of power in our current society and culture, and any current risks, etc. (For example, constraining just to AI, and with a content warning: https://www.iwf.org.uk/media/q4zll2ya/iwf-ai-csam-report_public-oct23v1.pdf. This is very sad to see.) On the other hand, with regard to climate change (which can also be current) or AI risks, we should also be concerned when corporations or developers neglect known risks or pursue science/development irresponsibly. I think it is not wrong to work on these; I just don’t believe in “do not solve the other current risks and only work on future risks.”
On some comments saying our society is “getting better”—sure, but the baseline is a very low bar (slavery, for example). There are still many, many, many examples in different societies of how things are still very systematically messed up.