There’s important deontology about not unilaterally risking other people’s lives, but this mostly goes away in the case of risking your own life.
I don’t think so. I agree we shouldn’t have laws around this, but insofar as we have deontologies to correct for circumstances where our naive utility-maximizing calculations have historically been consistently biased, I think there have been enough cases of people uselessly martyring themselves for their causes to justify a deontological rule against sacrificing your own actual life.
Edit: Basically, I don’t want suicidal people to back-justify batshit insane reasons why they should die to decrease x-risk instead of getting help. And I expect these are the only people who would actually be at risk for a plan which ends with “and then I die, and there is 1% increased probability everyone else gets the good ending”.
I recently read The Sacrifices We Choose to Make by Michael Nielsen, which was a good read. Here are some relevant extracts.

At the time, South Vietnam was led by President Ngo Dinh Diem, a devout Catholic who had taken power in 1955, and then instigated oppressive actions against the Buddhist majority population of South Vietnam. This began with measures like filling civil service and army posts with Catholics, and giving them preferential treatment on loans, land distribution, and taxes. Over time, Diem escalated his measures, and in 1963 he banned flying the Buddhist flag during Vesak, the festival in honour of the Buddha’s birthday. On May 8, during Vesak celebrations, government forces opened fire on unarmed Buddhists who were protesting the ban, killing nine people, including two children, and injuring many more.
[...]
Unfortunately, standard measures for negotiation – petitions, street fasting, protests, and demands for concessions – were ignored by the Diem government, or met with force, as in the Vesak shooting.
[...]
Since conventional measures were failing, the Inter-Sect Committee decided to consider more extreme measures, including the idea of a voluntary self-immolation. While extreme, they hoped it would create an international media incident, to draw attention to the suffering of Buddhists in South Vietnam. They noted in their meeting minutes the power of photographs to focus international attention: “one body can reach where ten thousand leaflets cannot.” It was to be a Bodhisattva deed to help awaken the world.
[...]
On June 10, the Inter-Sect Committee contacted at least four Saigon-based members of the international media, telling them to be present for a “major event” that would occur the next morning. One of them was a photographer from the Associated Press, Malcolm Browne, who said he had “no idea” what he’d see, beyond expecting some kind of protest. When Thich Quang Duc and his attendants exited the car, Browne was 15 meters away, just outside the ring of chanting monks. Browne took more than 100 photos, fighting off nausea from the smell of burning gasoline and human flesh, and struggling with the horror, as he created a permanent visual record of Thich Quang Duc’s sacrifice.
The sacrifice was not in vain. The next day, Browne’s photos made the front page of newspapers around the world. They shocked people everywhere, and galvanized mass protests in South Vietnam. US President John F. Kennedy reportedly exclaimed “Jesus Christ!” upon first seeing the photo. The US government, which had been instrumental in installing and supporting the anti-communist Diem, withdrew its support, and just a few months later supported a coup that led to Diem’s death, a change in government, and the end of anti-Buddhist policy.
Nielsen also includes unsuccessful or actively repugnant examples of it.
The sociologist Michael Biggs has identified more than 500 self-immolations as protest in the four decades after Thich Quang Duc, most or all of which appear to have been inspired in part by Thich Quang Duc.
I’ve discussed Thich Quang Duc’s sacrifice in tacitly positive terms. But I don’t want to uncritically venerate this kind of sacrifice. As with Kravinsky’s kidney donation, while it had admirable qualities, it also had many downsides, and the value may be contested. Among the 500 self-immolations identified by Biggs, many seem pointless, even evil. For example: more than 200 people in India self-immolated in protest over government plans to reserve university places for lower castes. This doesn’t seem like self-sacrifice in service of a greater good. Rather, it seems likely many of these people lacked meaning in their own lives, and confused the grand gesture of the sacrifice for true meaning. Moral invention is often difficult to judge, in part because it hinges on redefining our relationship to the rest of the universe.
I also think this paragraph about Quang Duc is quite relevant:
Quang Duc was neither depressed nor suicidal. He was active in his community, and well respected. Another monk, Thich Nhat Hanh, who had lived with him for the prior year, wrote that Thich Quang Duc was “a very kind and lucid person… calm and in full possession of his mental faculties when he burned himself.” Nor was he isolated and acting alone or impulsively. As we’ll see, the decision was one he made carefully, with the blessing of and as part of his community.
I’m not certain if there’s a particular point you want me to take away from this, but thanks for the information, and for including an unbiased sample from the article you linked. I don’t think I changed my mind much from reading this, though.
Do you also believe there is a deontological rule against suicide? I have heard rumor that most people who attempt suicide and fail, regret it. At the same time, I think some lives are worse than death (for example, see Amanda Luce’s Book Review: Two Arms And A Head that won the ACX book review prize), and so I believe it should be legal and sometimes supported, even if it were the case that most attempted suicides have been regretted.
I have heard rumor that most people who attempt suicide and fail, regret it.
After doing some research on this, I think this is unlikely to be true. The only quantitative study I found says that among its sample of suicide attempt survivors, 35.6% are glad to have survived, while 42.7% feel ambivalent, and 21.6% regret having survived. I also found a couple of sources agreeing with your “rumor”, but one cited just a suicide awareness trainer as its source, while the other cited the above study as the only evidence for its claim, somehow interpreting it as “Previous research has found that more than half of suicidal attempters regret their suicidal actions.” (Gemini 2.5 Pro says “It appears the authors of the 2023 paper misinterpreted or misremembered the findings of the 2005 study they cited.”)
If this “rumor” were true, I would expect to see a lot of studies supporting it, because such studies are easy to do and the result would be highly useful for people trying to prevent suicides (i.e., they can use it to convince potential suicide attempters that they’re likely to regret it). Evidence to the contrary is likely to be suppressed or not gathered in the first place, as almost nobody wants to encourage suicides. (The above study gathered the data incidentally, for a different purpose.) So everything seems consistent with the “rumor” being false.
Interesting, thanks. I think I had heard the rumor before and believed it.
In the linked study, it looks like they asked people about regret very shortly after the suicide attempt. This could bias the results either towards less regret about having survived (little time to change their mind) or towards more regret about having survived (people might be scared to signal intent to retry suicide, for fear of being committed, which I think sometimes happens soon after failed attempts).
I think very very many people are not making an informed decision when they decide to commit suicide.
For example, I think quantum immortality is quite plausibly a thing. Very few people know about quantum immortality and even fewer have seriously thought about it. This means that almost everyone on the planet might have a very mistaken model of what suicide actually does to their anticipated experience.[1] Also, many people are religious and believe in a pleasant afterlife. Many people considering suicide are mentally ill in a way that compromises their decision making. Many people think transhumanism is impossible and won’t arrange for their brain to be frozen for that reason.
I agree that there is some threshold on the fraction of ill-considered suicides relative to total suicides such that suicide should be legal if we were below that threshold. I used to think we were maybe below that threshold. After I began studying physics at uni and so started taking quantum immortality more seriously, I switched to thinking we are maybe above the threshold.
You might find yourself in a branch where your suicide attempt failed, but a lot of your body and mind were still destroyed. If you keep exponentially decreasing the amplitude of your anticipated future experience in the universal wave function further, you might eventually find that it is now dominated by contributions from weird places and branches far-off in spacetime or configuration space that were formerly negligible, like aliens simulating you for some negotiation or other purpose.
I don’t really know yet how to reason well about what exactly the most likely observed outcome would be here. I do expect that by default, without understanding and careful engineering that our civilisation doesn’t remotely have the capability for yet, it’d tend to be very Not Good.
This all feels galaxy-brained to me and like it proves too much. By analogy I feel like if you thought about population ethics for a while and came to counterintuitive conclusions, you might argue that people who haven’t done that shouldn’t be allowed to have children; or if they haven’t thought about timeless decision theory for a while they aren’t allowed to get a carry license.
I don’t think it proves too much. Informed decision-making comes in degrees, and some domains are just harder? Like, I think my threshold for leaving people free to make their own mistakes if they are the only ones harmed by them is very low, compared to where the human population average seems to be at the moment. But my threshold is, in fact, greater than zero.
For example, there are a bunch of things I think bystanders should generally prevent four year old human children from doing, even if the children insist that they want to do them. I know that stopping four year old children from doing these things will be detrimental in some cases, and that having such policies is degrading to the children’s agency. I remember what it was like being four years old and feeling miserable because of kindergarten teachers who controlled my day and thought they knew what was best for me. I still think the tradeoff is worth it on net in some cases.
I just think that the suicide thing happens to be a case where doing informed decision-making is maybe just too tough for way too many humans and thus some form of ban could plausibly be worth it on net. Sports betting is another case where I was eventually convinced that maybe a legal ban of some form could be worth it.
(I agree with Lucious in that I think it is important that people have the option of getting cryopreserved and also are aware of all the reality-fluid stuff before they decide to kill themselves.)
“Important” is ambiguous: I agree it matters, but it doesn’t follow that this civilization should ban whole life options from people until they have heard about niche philosophy. Most people will never hear about niche philosophy.
I don’t think quantum immortality changes anything. You can reframe this in terms of standard probability theory and condition on them continuing to have subjective experience, and still get to the same calculus.
However, only considering the branches in which you survive, or conditioning on having subjective experience after the suicide attempt, ignores the counterfactual suffering prevented in all the branches (or probability mass) in which you did die. Those branches may be less unpleasant than the ones in which you survived, but they are far greater in number! Ignoring them biases the reasoning toward rare survival tails that don’t dominate the actual expected utility.
I don’t think quantum immortality changes anything. You can reframe this in terms of standard probability theory and condition on them continuing to have subjective experience, and still get to the same calculus.
I agree that quantum mechanics is not really central for this on a philosophical level. You get a pretty similar dynamic just from having a universe that is large enough to contain many almost-identical copies of you. It’s just that it seems at present very unclear and arguable whether the physical universe is in fact anywhere near that large, whereas I would claim that a universal wavefunction which constantly decoheres into different branches containing different versions of us is pretty strongly implied to be a thing by the laws of physics as we currently understand them.
However, only considering the branches in which you survive, or conditioning on having subjective experience after the suicide attempt, ignores the counterfactual suffering prevented in all the branches (or probability mass) in which you did die. Those branches may be less unpleasant than the ones in which you survived, but they are far greater in number! Ignoring them biases the reasoning toward rare survival tails that don’t dominate the actual expected utility.
It is very late here and I should really sleep instead of discussing this, so I won’t be able to reply as in-depth as this probably merits. But, basically, I would claim that this is not the right way to do expected utility calculations when it comes to ensembles of identical or almost-identical minds.
A series of thought experiments might help illustrate part of where my position comes from:
1. Imagine someone tells you that they will put you to sleep and then make two copies of you, identical down to the molecular level. They will place you in a room with blue walls. They will place one copy of you in a room with red walls, and the other copy in another room with blue walls. Then they will wake all three of you up.
What color do you anticipate seeing after you wake up, and with what probability?
I’d say 2⁄3 blue, 1⁄3 red. Because there will now be three versions of me, and until I look at the walls I won’t know which one I am.
2. Imagine someone tells you that they will put you to sleep and then make two copies of you. One copy will not include a brain. It’s just a dead body with an empty skull. Another copy will be identical to you down to the molecular level. Then they will place you in a room with blue walls, and the living copy in a room with red walls. Then they will wake you and the living copy up.
What color do you anticipate seeing after you wake up, and with what probability? Is there a 1⁄3 probability that you ‘die’ and don’t experience waking up because you might end up ‘being’ the corpse-copy?
I’d say 1⁄2 blue, 1⁄2 red, and there is clearly no probability of me ‘dying’ and not experiencing waking up. It’s just a bunch of biomass that happens to be shaped like me.
3. As 2, but instead of creating the corpse-copy without a brain, it is created fully intact, then its brain is destroyed while it is still unconscious. Should that change our anticipated experience? Do we now have a 1⁄3 chance of dying in the sense that we might not experience waking up? Is there some other relevant sense in which we die, even if it does not affect our anticipated experience?
I’d say no and no. This scenario is identical to 2 in terms of the relevant information processing that is actually occurring. The corpse-copy will have a brain, but it will never get to use it, so it won’t affect my anticipated experience in any way. Adding more dead copies doesn’t change my anticipated experience either. My best-scoring prediction will be that I have a 1⁄2 chance of waking up to see red walls, and a 1⁄2 chance of waking up to see blue walls.
In real life, if you die in the vast majority of branches caused by some event (i.e. that’s where the majority of the amplitude is) but survive in some, the calculation for your anticipated experience would seem not to include the branches where you die, for the same reason it doesn’t include the dead copies in thought experiments 2 and 3.
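To make the counting rule behind these answers explicit, here is a minimal sketch in Python (the function name and the way copies are represented are my own illustration, not something from the thread): anticipated experience is computed only over copies that actually wake up, each weighted equally, so corpse-copies simply drop out.

```python
from collections import Counter

def anticipated_experience(copies):
    # `copies` is a list of (observation, wakes_up) pairs.
    # Only copies that actually wake up contribute, each with equal weight.
    awake = [obs for obs, wakes_up in copies if wakes_up]
    counts = Counter(awake)
    return {obs: n / len(awake) for obs, n in counts.items()}

# Thought experiment 1: you plus two living copies.
print(anticipated_experience([("blue", True), ("blue", True), ("red", True)]))
# -> {'blue': 0.67, 'red': 0.33} (approximately)

# Thought experiments 2 and 3: the corpse-copy never wakes up, so it drops out.
print(anticipated_experience([("blue", True), ("red", True), ("corpse", False)]))
# -> {'blue': 0.5, 'red': 0.5}
```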
(I think Eliezer may have written about this somewhere as well using pretty similar arguments, maybe in the quantum physics sequence, but I can’t find it right now.)
You get a pretty similar dynamic just from having a universe that is large enough to contain many almost-identical copies of you.
Again, not sure why a large universe is needed. The expected utility ends up the same either way, whether you have some fraction of branches in which you remain alive or some probability of remaining alive.
Regarding the expected utility calculus: I agree with everything you said, but I don’t see how any of it allows you to disregard the counterfactual suffering from not committing suicide in your expected value calculation.
Maybe the crux is whether we consider the utility of each “you” (i.e. you in each branch) individually, and add it up for the total utility, or whether we consider all “you”s to have just one shared utility.
Let’s say that not committing suicide gives you −1 utility in n branches, but committing suicide gives you −100 utility in n/m branches and 0 utility in n − n/m branches.
If we treat all copies of you as having separate utilities and add them all up for a total expected utility calculation, not committing suicide gives −n utility while committing suicide leads to −100n/m utility. Therefore, as long as m>100, it is better to commit suicide.
If, on the other hand, you treat them as having one shared utility, you get either −1 or −100 utility, and −100 is of course worse.
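To make the two aggregation rules concrete, here’s a minimal sketch of the arithmetic in Python (the values n = 1000 and m = 200 are arbitrary illustrations, not numbers from the discussion):

```python
def additive_utilities(n, m):
    # Each branch-copy counts as a separate subject; utilities sum across branches.
    stay = -1 * n               # -1 in each of the n branches
    attempt = -100 * (n / m)    # -100 in the n/m surviving branches, 0 in the rest
    return stay, attempt

def shared_utility():
    # All copies count as one subject; only the continuing experience stream matters.
    stay = -1                   # the continuing subject sits at -1
    attempt = -100              # the continuing subject ends up in a -100 branch
    return stay, attempt

n, m = 1000, 200
print(additive_utilities(n, m))  # (-1000, -500.0): once m > 100, attempting sums to less total disutility
print(shared_utility())          # (-1, -100): staying alive is better regardless of m
```

Under the additive rule the comparison flips exactly at m > 100, as noted above; under the shared-utility rule, m drops out entirely.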
Do you agree that this is the crux? If so, why do you think that all the copies share one utility rather than their utilities adding up?
In a large universe, you do not end. It’s not that you expect to see some branch versus another; you just continue, the computation that is you continues. When you open your eyes, you’re not likely to find yourself as a person in a branch computed only relatively rarely; still, that person continues, and does not die.
Attempted suicide reduces your reality-fluid (how much you’re computed and how likely you are to find yourself there), but you will continue to experience the world. If you die in a nuclear explosion, the continuation of you will be somewhere else, sort-of isekaied; and mostly you will find yourself not in a strange world that recovers the dead but in a world where the nuclear explosion did not happen; still, in a large world, even after a nuclear explosion, you continue.
You might care about having a lot of reality-fluid, because this makes your actions more impactful, because you can spend your lightcone better, and improve the average experience in the large universe. You might also assign negative utility to others seeing you die; they’ll have a lot of reality-fluid in worlds where you’re dead and they can’t talk to you, even as you continue. But I don’t think it works out to assigning the same negative utility to dying as in branches of small worlds.
Yes, but the number of copies of you is still reduced (or, in standard probability theory, the probability that you are alive; or, in many-worlds, the number of branches). Why are these not equivalent in terms of the expected utility calculus?
Imagine that you’re an agent in the Game of Life. Your world, your laws of physics, are computed on a very large number of independent computers, all performing the same computation.
You exist within the laws of causality of your world, computed as long as at least one server computes your world. If some of them stop performing the computation, it won’t be the death of a copy; you’ll just have one fewer instance of yourself.
What’s the difference between fewer instances and fewer copies, and why is that load-bearing for the expected utility calculation?

You are of course right that there’s no difference between reality-fluid and normal probabilities in a small world: it’s just how much you care about various branches relative to each other, regardless of whether all of them will exist or only some.
I claim that the negative utility due to ceasing to exist is just not there, because you don’t actually cease to exist in a way you reflectively care about when you have fewer instances. For normal things (e.g., how much you care about paperclips), the expected utility is the same; but here, it’s the kind of terminal value that, I expect, would be different for most people: guaranteed continuation in 5% of instances is much better than 5% chance of continuing in all instances; in the first case, you don’t die!
I claim that the negative utility due to ceasing to exist is just not there
But we are not talking about negative utility due to ceasing to exist. We are talking about avoiding counterfactual negative utility by committing suicide, which still exists!
guaranteed continuation in 5% of instances is much better than 5% chance of continuing in all instances; in the first case, you don’t die!
I think this is an artifact of thinking of all of the copies as having a shared utility (i.e. you) rather than separate utilities that add up (i.e. so many yous will suffer if you don’t commit suicide). If they have separate utilities, we should think of them as separate instances of yourself.
it’s the kind of terminal value that, I expect, would be different for most people: guaranteed continuation in 5% of instances is much better than 5% chance of continuing in all instances; in the first case, you don’t die!
And even in the case where we are assigning negative utility to death, most people are really considering counterfactual utility from being alive, and 95% of that (expected) counterfactual utility is lost whether 95% of the “instances of you” die or whether there is a 95% chance that “you” die.
I think there is, and I think cultural mores support this well. Separately, I think we shouldn’t legislate morality, and though suicide is bad, it should be legal[1].
At the same time, I think some lives are worse than death (for example, see Amanda Luce’s Book Review: Two Arms And A Head that won the ACX book review prize), and so I believe it should be legal and sometimes supported
There also exist cases where it is in fact correct from a utilitarian perspective to kill, but this doesn’t mean there is no deontological rule against killing. We can argue about the specific circumstances where we need these rule carve-outs (eg war), but I think we’d agree that when it comes to politics and policy, there ought to be no carve-outs, since people are particularly bad at risk-return calculations in that domain.
But also this would mean we have to deal with certain liability issues, eg if ChatGPT convinces a kid to kill themselves, we’d like to say this is manslaughter or homicide iff the kid otherwise would’ve gotten better, but how do we determine that? I don’t know, and probably on net we should choose freedom instead, or this isn’t actually much of a problem in practice.
Makes sense. I don’t hold this stance; I think my stance is that many/most people are kind of insane on this, but that like with many topics we can just be more sane if we try hard and if some of us set up good institutions around it for helping people have wisdom to lean on in thinking about it, rather than having to do all their thinking themselves with their raw brain.
(I weakly propose we leave it here, as I don’t think I have a ton more to say on this subject right now.)