I have this weird sense that there’s an opportunity for some kind of “It Gets Better thing” for young philosophers
We would need to identify the sorts of things that can go wrong. For example, I can identify two types of philosophic horror at the world (there might be more). One is where the world seems to have become objectively horrifying, and you can't escape from this perception, or don't want to escape from it because you believe escaping would require sacrificing your reason, values, or personality. A complementary type is where you believe the world could become infinitely better if only everyone did X, but you're the only one who wants to do X, no-one else will support you, and in fact they try to talk you out of your ideas.
Example of the first: I know someone who believes in Many Worlds and is about to kill himself unless he can prove to himself that the worlds are "diverging" (in the jargon of Alastair Wilson) rather than "splitting". "Diverging worlds" are each self-contained, like in a single-world theory, but they can track each other for a time (i.e. the history of one will match the history of another up to a point). "Splitting worlds" are self-explanatory: worlds that start as one and branch into many. What's so bad about the splitting worlds, he says, is that the people in this world, whom you know and care about, are the ones who experience all possible outcomes, who get murdered by you in branches where you spontaneously become a killer (and add every bad thing you can think of, and can't, to the list of what happens to them). Also, distinct from this, human existence is somehow rendered meaningless because everything always happens. (I think the meaninglessness has to do with the inability to make a difference or produce outcomes, and not just the inconceivability of all possibilities being real.) In the self-contained "diverging worlds", the people you know have just one fate, their copies in the other worlds being different people, and you're saved from the horror and nihilism of the branching worlds.
Example of the second: recent LW visitor "Singularity_Utopia", who on the one hand says that an infinite perfect future of immortality and superintelligence is coming as soon as 2045, that we don't even need to work on friendliness, just focus on increasing intelligence, and that meanwhile the world could start becoming better right now if everyone embraced the knowledge of imminent "post-scarcity"… but who at the same time says on his website that his life is a living hell. I think without a doubt this is someone whose suffering is intimately linked with the fact that he has a message of universal joy that no-one is listening to.
Now if someone proposes to be a freelance philosophical Hippocrates, they have their work cut out for them. The "victims" of these mental states tend to be very intelligent and strong-willed. Example number one thinks only a psychopath could want to live in that sort of universe, so he doesn't want to solve his problem by changing his attitude towards splitting worlds; the only positive solution would be to discover that this ontology is objectively unlikely. Example number two is trying to save the world by living his life this way, so I suppose it seems supremely important to him to keep it up. He might be even less likely to change his ways.
How did your first friend turn out?
He’s still alive, but medicated and still miserable; by his account, only able to think for a few hours each day. MWI is his personal basilisk. For a while last year, he was excited when the Nobelist Gerard ’t Hooft was proposing to get quantum field theory from cellular automata, but that was only for very special QFTs, and no-one else has built on those papers so far. Right now he’s down because everyone he asks thinks David Wallace (Oxford exponent of MWI) is brilliant. I originally heard from him because of my skepticism about MWI, expressed many times on this site.
Is he still on Less Wrong?
Not really (though I told him about this thread). He spends his time corresponding directly with physicists and philosophers.
Any way for me to contact him?
(Taken to PM.)
Hang on, didn’t Everett believe that in the event of death, his consciousness would just follow a stream of events leading to his not being dead?
Maybe consider introducing him to instrumentalism. Worrying to death about untestables is kind of sad.
It took me three months to realize that I completely failed to inquire about your second friend. I must have seen him as having the lesser problem and dismissed it out of hand, not realizing that a problem's seeming easy isn't the same as its being solved, like putting off easy homework.
How is your second friend turning out?
He isn’t my friend, he’s just some guy who decided to be a singularity cheerleader. But his website is still the same—super-AI is inherently good and can’t come soon enough, scarcity is the cause of most problems and abundance is coming and will fix it, life in the pre-singularity world is tragic and boring and bearable only because the future will be infinitely better.