The Repugnant Lifespan Conclusion
Certainty: Speculative moral philosophy. So, who knows! It’s mostly unanswered questions anyways.
Which would you choose, if you had to?
1. Alice is born, and lives a happy life for 80 years.
2. Alice and Bob are both born, and live equally happy lives for 79 years.
Intuitively, the second seems more appealing. A year of Alice’s life is surely worth Bob’s existence. But if you keep making choices like that, then you’d prefer that Alice, Bob, and Charlie are all born and live for 78 years. You can iterate, reducing how long everyone lives while increasing how many people there are… but then at some point you’d prefer a population where each person lives for (say) 1 minute (or one computation step[1]) over a single person living a full life.
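To make the shape of that slide concrete, here’s a minimal sketch in Python. It assumes a bookkeeping the argument above never actually commits to: score worlds by total happy person-years, and at each step grow the population faster than lifespans shrink (the 2x-people / 0.6x-lifespan step sizes are made up, purely for illustration). The naive total never goes down, yet within a few dozen steps everyone lives for less than a minute.

```python
# Illustrative sketch only: score worlds by total happy person-years,
# and at each step add people faster than lifespans shrink.
# The step sizes (x2 people, x0.6 lifespan) are made up, not from the post.

MINUTE_IN_YEARS = 1 / (365 * 24 * 60)

population, lifespan_years = 1, 80.0   # start: just Alice, 80 happy years
step = 0
while lifespan_years > MINUTE_IN_YEARS:
    step += 1
    population *= 2            # twice as many people...
    lifespan_years *= 0.6      # ...each living 60% as long (2 * 0.6 > 1)
    total_person_years = population * lifespan_years
    print(f"step {step}: {population:,} people, "
          f"{lifespan_years:.2e} years each, "
          f"{total_person_years:,.0f} person-years in total")
```

Nothing below depends on this particular scoring rule, though; the argument only needs each individual step to feel like an improvement.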
Surely this is silly? This is basically the repugnant conclusion applied to lifespans instead of happiness; perhaps you bite that bullet but reject this one.
Now consider what I’m currently calling the “FrankenWorm”:[2] imagine a mind that simulates one minute of Alice, then one minute of Bob, then one minute of Charlie, etc. forever—never simulating the same person twice.[3]
Surely the worm has less moral significance than a normal person? This feels especially true if it only runs a single computation step of each person.

Would you still love me if I was a FrankenWorm?
Should we somehow treat the FrankenWorm differently from a population of people who all live short lives at once? I don’t think so, but maybe you do—for example, perhaps some way of caring about continuity of experience would see the FrankenWorm as worse, due to constantly breaking the continuity in a way that a population of humans doesn’t.
If you’re hung up on creation here (for example if you take the person-affecting view that it is not morally good to create people because nonexistence isn’t bad “for anyone”, as there isn’t an “anyone”), you could alternatively imagine that Alice and Bob are both currently alive, but Alice has to spend a year of her lifespan to save a teenage Bob from a runaway trolley. Surely it’s good for her to do so? But then, if Alice and Bob both spend a life-year to save Charlie, and Alice, Bob and Charlie spend a life-year to save Dave, and so on, we get the same problem. Likewise, we could compare saving Alice, Bob and Charlie to saving the FrankenWorm.
Before thinking of examples like this, my previous main approximation of my values was that each ‘experience-moment’ gets some value based on whether it’s happy or sad, and then you add up all the experience-moments to see how good a universe-history is. For example, I’d rather be happy for 5 minutes than 1 minute, and rather be happy for 1 minute on both Sunday and Saturday than for only 1 minute on Sunday.[4]
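Here’s a toy encoding of that approximation, purely for illustration (the approximation itself only says “sum the experience-moments”; the particular numbers, including the 100 million one-minute lives, are arbitrary):

```python
# Toy version of the "sum of experience-moments" approximation:
# every happy minute counts for 1, and a world's value is the total.

MINUTES_PER_YEAR = 365 * 24 * 60

def world_value(people: int, happy_minutes_each: float) -> float:
    """Value of a world where `people` each get `happy_minutes_each` happy minutes."""
    return people * happy_minutes_each

worlds = {
    "Alice alone (80 years)":          world_value(1, 80 * MINUTES_PER_YEAR),
    "Alice and Bob (79 years each)":   world_value(2, 79 * MINUTES_PER_YEAR),
    "FrankenWorm-ish (1 minute each)": world_value(100_000_000, 1),
}
for name, value in worlds.items():
    print(f"{name}: {value:,.0f} happy minutes")
```

Under this bookkeeping the FrankenWorm-ish world simply wins, which is the bullet the rest of this post is poking at.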
Now, I think that destroying a person is perhaps worse (or at least forgoes more good) than creating a new one is good. For example, if I kill Charlie and replace him with someone else, then I have made the world worse.
It’s surely still pretty good, at least in some cases, to create people even if they’ll one day die. I still feel this way even if I know that they’ll die soon. But all the FrankenWorm does is create happy people who’ll merely die soon afterwards. So what’s the problem?
For another piece of weirdness: Let’s take for granted my position that I have no problem being replaced by a close enough clone, or being destructively teleported. It seems like I care more about there being a me, somewhere, than I do about there being extra me’s. That is, while I would prefer having a clone over just the one Xela, I would go to much greater lengths to prevent there from being zero Xela[5] in the universe. Similarly, while I would prefer to be alive for the next 60[6] years, over just being alive for all of those years aside from this one, I would go to much greater lengths to ensure that I get ‘revived’ eventually[7] than I would to make sure that I don’t end up in a coma for one year. I feel this way towards others, too.
Additionally, I would go to similar lengths to make sure there are 3 clones instead of 2 as I would for 2 instead of 1.
However, if our universe is ‘big’ enough, then it’ll likely contain many instances of any particular person—so then, should I value me and my clone on Earth like I would the lives of 2 out of the many instances already out there?
Lastly, the whole notion of “counting people” is suspect—if a brain becomes twice as thick, are there now twice as many people in it?? This would throw everything mentioned previously into question!
I feign no hypotheses about what to do here. Some see this as justification for average utilitarianism, but that seems too contrary to my intuition: how can it be bad to create a happy person just because of how happy others are! Suppose Omega told me that the only life out there in the universe[8] was a glorious transhumanist civilization in Alpha Centauri that miraculously had a copy of everyone on Earth. Surely I shouldn’t then stop wanting to live, or want to destroy the world in hopes of raising average utility??[9]
[1] Probably it’s discrete?
[2] It’s a “worm” because if you imagined Alice, Bob, Charlie, etc. as paths through spacetime, they would each look like “worms”. The FrankenWorm is then a diced up version of each of their worms, best served with garlic and a side of marinara sauce.
[3] This is basically the time-like version of the space-like lifespan repugnant conclusion, if you get what I mean.
[4] Here I’m taking the hedonic-utilitarian view. However, I don’t think I’m in favor of wireheading, and most ways to account for that seem to require looking at ‘the whole worm’. That is, not just a sum/integral of a function of the current moment/local stretch of time, but instead moments at different times can ‘interact’ - for example, perhaps experiences have to be non-repetitive. I still think I’m mostly a hedonic-utilitarian.
[5] It’s somewhat funnier for the plural of Xela to be Xela, and thus, it shall be.
[6] and more, of course! Here we’re pretending we’ll get a normal future, with FrankenWorms and clones but not antiagathics. …still less strange than what it’s actually looking like it’ll be.
[7] With some confusion about how long I need to live afterwards for it to be worth it.
[8] Aside from Earthlings, of course.
[9] If you’re a negative utilitarian, you may in fact wish the planet was lifeless. For this example, pretend that everyone here lived a life worth living according to you.
Idk man, seems like you’re asking too much from a social technology that encourages / discourages behavior by labeling it good or bad (morality). Through such a lens, how is it possible to answer such questions in a universal sense?
What do you think? Own your judgment. Don’t get trapped by silly rules.
I don’t understand what you mean. Here, I am talking purely of my own preferences.
Some of us care about ‘silly rules’, because we find them compelling.
Sorry for the flippant “silly rules” comment, I don’t think I quite engaged with what you were trying to do. My excuse is it was well past midnight and I was tired, but frankly I shouldn’t have posted then.
That said, I do think you’re barking up the wrong tree. The FrankenWorm isn’t a problem your ethics needs to solve; it’s evidence that hedonic utilitarianism is flawed. If you take a moral intuition (happiness is good, encourage it), promote it to a unit (utils), then optimize, you get weird conclusions. But happiness was never a unit; it’s a lossy compression of a much messier thing: engagement, meaning, integration with your chosen family, absence of suffering. This is a Goodhart / alignment sort of issue. Garbage in, garbage out.
Any aggregation rule produces these edge case absurdities, and ethics isn’t the kind of thing that aggregates cleanly. The ease of finding these paradoxes across frameworks is a feature of trying to make ethics arithmetic, not a deep fact about morality.
Wouldn’t you still get weird conclusions if you replaced happiness in the argument by “the messier thing that happiness is trying to get at”?
The problem is that arithmetic is just writing down how we compare things. And we can’t just stop comparing things. There has to be some answer to “is it better to have one happy person or two people with slightly less happiness”, and once you decide that that question has an answer, that’s all you need to get a bad conclusion from it. If you say “then don’t use arithmetic”, that’s equivalent to “we can’t tell which choice is ethical”.
Sure, I agree, though I would argue that the less reductive the measurement, the less weird the outcome. So the closer you get to accurately measuring “the messier thing that happiness is trying to get at”, the better the outcomes in the weird edge cases.
Err, a minor point: you can totally compare things without math. Compare the pleasure of holding a baby with the pleasure of solving a hard problem; people will have consistent, different views on the matter without anyone reaching for a calculator.
The more major point you are making is that ethics / morality must produce a total ordering over all possible world states. I disagree with that point. It’s totally fine for ethics to rank A>B and C>D without ranking A vs C. Some questions just don’t have determinate answers. In the example with the happy people, perhaps there is an acceptable range, say anywhere from one very happy person to 20 sort-of-happy people. I’m not claiming this range precisely, or even that a range is the answer to this one; I don’t know the answer here. I am simply claiming that answers in ethics can be fuzzy, or ranges, or not answerable at all.
If there is an acceptable range, then there is no repugnant conclusion. The interval breaks the iteration needed to reach the extremes.
In the repugnant conclusion case, you are ranking A>B and B>C, which implies that you can rank A>C. It wouldn’t make sense to not be able to rank A>C under these circumstances.
If you do that, then you are able to rank 19 happy people versus 20 slightly less happy ones, but you are not able to rank 20 happy people versus 21 slightly less happy ones. That isn’t logically impossible, but it leads to weird conclusions. For instance, it may mean that if you are comparing 19 people to 20, adding one unchanged person to the comparison changes it into a comparison of 20 to 21 and suddenly you are no longer able to do it. “I am not able to compare these groups” doesn’t suddenly mean that it’s not arithmetic; arithmetic can then be used to figure out what things you can and cannot compare.
Sorites paradoxes are everywhere: heaps, baldness, colour boundaries. We live with them just fine.
This is one of those: the worm is not a life, Alice is. The middle is muddled. Type difference.
I think it should be a total order. Given a choice, one can either try to make one of the two happen, or not particularly care either way—that is, indifference.
With the repugnant conclusion, the problem is that each iteration really does still seem to be an improvement to me. Of course, if you don’t feel this way then it doesn’t work.
Aha! A crux! Beautiful. Thank you for the discussion :).
The repugnant conclusion (as well as the lifespan version) doesn’t require cardinal preferences (that is, strength of preference), only ordinal ones (that is, just direction of preference). Surely my “true” preferences are transitive—I’m not sure what it would mean for them to be otherwise!
I do feel like having two people live for 79 years is preferable to one person who lives for 80, and likewise at each step I feel like shaving off a small amount of lifespan for a new person to get to live is good. If I somehow find myself in a world where I need to choose between these, I’m probably not going to be indifferent about the choice. If I’m to reject the conclusion and e.g. not value the FrankenWorm much, then I want to know where it is that my preferences run contrary to my intuition.
These paradoxes don’t seem to be specific to utilitarianism—plenty of self-proclaimed non-utilitarians would endorse sacrificing a little bit of life to save another’s, yet reject going all the way.
If ethics doesn’t aggregate cleanly, then I want to know the messy way that it does aggregate.
I think the question feels off: “Is the FrankenWorm better than Alice”, with no “better for what” attached. “What’s better, a duck-sized horse or a horse-sized duck?” Unanswerable until you specify what for. You could say “for me to ride” or “for me to be safer” and that works.
Hedonic utilitarianism’s move is to say that better is “more total happy experience moments in the universe.” In that case the math works and the FrankenWorm wins.
My background view is that morality is a mix of genetically and culturally evolved tech for coordinating groups of humans living roughly human lives. The “for what” baked into our moral intuitions is roughly “for groups of humans flourishing together”. The FrankenWorm isn’t in the domain the tool was selected for. You’re asking your immune system about cryptocurrency.
This is why your pairwise intuitions chain into a conclusion you reject. The early steps (“small sacrifice to save a life”) match templates the moral machinery actually has: rescue, sacrifice, kin care. The late steps don’t match anything; morality never met a FrankenWorm. The transitivity argument assumes every step is the same kind of operation. I don’t think it is. Early steps are tractable moral judgments; the late steps have changed the “for what” sneakily in the background: a FrankenWorm is not a human living a roughly human life.
I think the answer is that it doesn’t aggregate. Aggregation is what utilitarianism does. Real ethics / morality as experienced by people is a library of pattern-matched responses to recognized situations, and it produces nonsense or uncertainty or fuzziness on unrecognized ones. The repugnant conclusion is one of those unrecognized scenarios.
“If I have to choose between having the Frankenworm exist or having Alice exist, what does ethics require me to choose?”
If you give up the idea that every step is the same kind of thing, then there must be a step which counts, and a successive and very similar step, which doesn’t count. And that itself will lead to bad conclusions.
The problem here is that you can get the repugnant conclusion by stringing together only recognized scenarios that you individually do think you know how to handle.
You haven’t identified the “for what” here. Please explain your point further.
Sorites paradoxes again.
Yeah, problem being that you are handling each comparison with a flawed model.
Doing A results in a world where Alice exists. Doing B results in a world that is identical except the Frankenworm exists. You must do either A or B (perhaps one of them is to wait without doing the other one.) Does ethics tell you to do A or B here? (Or does it say nothing about what you should do?)
My model of ethics says nothing about choosing between the two states.
I personally don’t see the value in a FrankenWorm. I would choose for it not to exist, if you forced me to choose and that option was otherwise costless to me, but my understanding of morality doesn’t inform that choice.
Whatever it is that motivates that choice is what I am asking about, with the motivations that prioritize yourself or people especially close to you ‘factored out’. That is what I mean when I talk about ‘morality’ here.
Okay, I think I see where you are coming from, hmmm
Residual preference after direct self-interest is factored out could include evolved revulsion, fear of the strange, a strong preference for not summoning eldritch beings beyond my ken, and still not be a unified moral function. None of these need cohere into a system that aggregates cleanly.
I think we disagree on the need to order all world states with coherent rules. I think saying idk is sometimes right.
Inaction is still a choice.
I think that the argument here against “strongly incomplete” preferences is convincing. A strong incompleteness means there are states A, B, C such that B is preferable to A, but C is incomparable to both A and B. The post shows that such an agent could randomly self-modify in a way that tends to steer the world in a strictly preferable way.
Without a strong incompleteness, any incomparable pair can be treated as if you were indifferent. This, then, is just as much a judgement as saying that one is better or worse.
Unless you reject transitivity? I’m not sure how my preferences could be ‘truly’ intransitive, as opposed to reasoning errors or lossy information about my desires causing me to violate transitivity.
The Wentworth argument runs on agents with stable preferences over well-defined states. Strong incompleteness is unstable on those terms.
However, my claim is that the comparison itself is malformed, as previously described. You can’t money-pump someone over a question that doesn’t have an answer of that shape.
Indifference and incomparability aren’t the same thing. My idk is more incomparability, meaning the question doesn’t have an answer that makes sense. It’s like asking which is hotter, the number 7 or the colour blue.
Human intuition around non-continuous forms of life seems almost non-existent. Btw, is the P in your name for Producer then?