Sorry for the flippant “silly rules” comment; I don’t think I quite engaged with what you were trying to do. My excuse is that it was well past midnight and I was tired, but frankly I shouldn’t have posted then.
That said, I do think you’re barking up the wrong tree. The FrankenWorm isn’t a problem your ethics needs to solve; it’s evidence that hedonic utilitarianism is flawed. If you take a moral intuition (happiness is good, encourage it), promote it to a unit (utils), then optimize, you get weird conclusions. But happiness was never a unit; it’s a lossy compression of a much messier thing: engagement, meaning, integration with your chosen family, absence of suffering. This is a Goodhart-style alignment issue. Garbage in, garbage out.
Any aggregation rule produces these edge-case absurdities, and ethics isn’t the kind of thing that aggregates cleanly. The ease of finding these paradoxes across frameworks is a feature of trying to make ethics arithmetic, not a deep fact about morality.
“If you take a moral intuition (happiness is good, encourage it), promote it to a unit (utils), then optimize, you get weird conclusions. But happiness was never a unit, it’s a lossy compression of a much messier thing”
Wouldn’t you still get weird conclusions if you replaced happiness in the argument by “the messier thing that happiness is trying to get at”?
“The ease of finding these paradoxes across frameworks is a feature of trying to make ethics arithmetic, not a deep fact about morality.”
The problem is that arithmetic is just writing down how we compare things. And we can’t just stop comparing things. There has to be some answer to “is it better to have one happy person or two people with slightly less happiness”, and once you decide that that question has an answer, that’s all you need to get a bad conclusion from it. If you say “then don’t use arithmetic”, that’s equivalent to “we can’t tell which choice is ethical”.
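To make the arithmetic concrete: a toy sketch in Python (all numbers invented) of how “each small step adds a person, shaves per-person happiness a little, and still increases the total” chains into a huge population of barely-happy people, which is the shape of the repugnant conclusion.

```python
# Toy model: a world is (population, happiness per person), and "better"
# is defined as a larger total. All numbers are invented for illustration.

def total_happiness(population: int, per_person: float) -> float:
    return population * per_person

world = (1, 100.0)  # one very happy person

# Each step adds one person and lowers per-person happiness, while
# keeping the total strictly increasing, so every step "wins" locally.
for _ in range(1000):
    pop, h = world
    new_pop = pop + 1
    new_h = h * pop / new_pop * 1.0005  # slightly more total, less each
    world = (new_pop, new_h)

pop, h = world
assert total_happiness(pop, h) > total_happiness(1, 100.0)  # total went up
assert h < 1.0  # but each person is now barely happy at all
```

Every individual comparison looks like an improvement under the total; only the thousand-step chain produces the absurdity.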
“Wouldn’t you still get weird conclusions if you replaced happiness in the argument by “the messier thing that happiness is trying to get at”?”
Sure, I agree, though I would argue the less reductive the measurement, the less weird the outcome. So the closer you get to accuracy in your measurement of “the messier thing that happiness is trying to get at”, the better the outcomes in the weird edge cases.
“arithmetic is just writing down how we compare things”
Err, a minor point: you can totally compare things without math. Compare the pleasure of holding a baby with the pleasure of solving a hard problem; people will have consistent, differing views on the matter without anyone reaching for a calculator.
The more major point you are making is that ethics/morality must produce a total ordering over all possible world states. I disagree with that point. It’s totally fine that ethics can rank A>B and C>D without ranking A vs C. Some questions just don’t have determinate answers. In the example with the happy people, perhaps there is an acceptable range, say anywhere from one very happy person to 20 sort of happy people. I’m not claiming that precise range, or even that a range is the answer here; I don’t know the answer here. I am simply claiming that answers in ethics can be fuzzy, or ranges, or not an answerable question at all.
If there is an acceptable range, then there is no repugnant conclusion. The interval breaks the iteration needed to reach the extremes.
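One way to make the acceptable-range idea concrete, as a sketch only (the band width and the numbers are invented): treat totals that fall within the band as incomparable, so the small steps the chain needs simply return no verdict.

```python
# Sketch: pairwise preference over (population, happiness-per-person)
# worlds, with an incomparability band. Band width is invented.

def prefer(a, b, band=0.05):
    """Return 'a', 'b', or None (incomparable): compare by total
    happiness, but refuse a verdict when the totals are within the band."""
    ta, tb = a[0] * a[1], b[0] * b[1]
    if abs(ta - tb) < band * max(ta, tb):
        return None  # inside the acceptable range: no answer
    return 'a' if ta > tb else 'b'

# A small repugnant-conclusion step stalls (totals 100.0 vs 100.8):
assert prefer((20, 5.0), (21, 4.8)) is None
# A large difference still gets a ranking (totals 100 vs 500):
assert prefer((1, 100.0), (1000, 0.5)) == 'b'
```

The cost is that chaining `prefer` is no longer transitive, since many small Nones can add up to a large comparable difference.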
“It’s totally fine that Ethics can rank A>B and C>D without ranking A vs C.”
In the repugnant conclusion case, you are ranking A>B and B>C, which implies that you can rank A>C. It wouldn’t make sense to not be able to rank A>C under these circumstances.
“In the example with the happy people, perhaps there is an acceptable range, say anywhere from one very happy person to 20 sort of happy people.”
If you do that, then you are able to rank 19 happy people versus 20 slightly less happy ones, but you are not able to rank 20 happy people versus 21 slightly less happy ones. That isn’t logically impossible, but it leads to weird conclusions. For instance, it may mean that if you are comparing 19 people to 20, adding one unchanged person to the comparison changes it into a comparison of 20 to 21 and suddenly you are no longer able to do it. “I am not able to compare these groups” doesn’t suddenly mean that it’s not arithmetic; arithmetic can then be used to figure out what things you can and cannot compare.
Sorites paradoxes are everywhere: heaps, baldness, colour boundaries. We live with them just fine. This is one of those: the worm is not a life, Alice is, and the middle is muddled. It’s a type difference.
I think it should be a total order. Given a choice, one can either try to make one of the two happen, or not particularly care either way—that is, indifference.
With the repugnant conclusion, the problem is that each iteration really does still seem to be an improvement to me. Of course, if you don’t feel this way then it doesn’t work.
Aha! A crux! Beautiful. Thank you for the discussion :).
The repugnant conclusion (as well as the lifespan version) doesn’t require cardinal preferences (that is, strength of preference), only ordinal ones (that is, just direction of preference). Surely my “true” preferences are transitive—I’m not sure what it would mean for it to be otherwise!
I do feel like having two people live for 79 years is preferable to one who lives for 80, and likewise at each step I feel like shaving off a small amount of lifespan for a new person to get to live is good. If I somehow find myself in a world where I need to choose between these, I’m probably not going to be indifferent about the choice. If I’m to reject the conclusion and e.g. not value the FrankenWorm much, then I want to know where it is that my preferences run contrary to my intuition.
The paradox problems don’t seem to be a feature of utilitarianism specifically—plenty of self-proclaimed non-utilitarians would endorse sacrificing a little bit of life to save another’s, yet reject going all the way.
If ethics doesn’t aggregate cleanly, then I want to know the messy way that it does aggregate.
I think the question feels off: “Is the FrankenWorm better than Alice?” with no “better for what” attached. “What’s better, a duck-sized horse or a horse-sized duck?” is unanswerable until you specify what for. You could say “for me to ride” or “for me to be safer” and that works.
Hedonic utilitarianism’s move is to say that better is “more total happy experience moments in the universe.” In that case the math works and the FrankenWorm wins.
My background view is that morality is a mix of genetic and culturally evolved tech for coordinating groups of humans living roughly human lives. The “for what” baked into our moral intuitions is roughly “for groups of humans flourishing together”. The FrankenWorm isn’t in the domain the tool was selected for. You’re asking your immune system about cryptocurrency.
This is why your pairwise intuitions chain into a conclusion you reject. The early steps (“small sacrifice to save a life”) match templates the moral machinery actually has: rescue, sacrifice, kin care. The late steps don’t match anything; morality never met a FrankenWorm. The transitivity argument assumes every step is the same kind of operation. I don’t think it is. Early steps are tractable moral judgments; the late steps have sneakily changed the “for what” in the background. A FrankenWorm is not a human living a roughly human life.
“If ethics doesn’t aggregate cleanly, then I want to know the messy way that it does aggregate.”
I think the answer is that it doesn’t aggregate. Aggregation is what utilitarianism does. Real ethics/morality as experienced by people is a library of pattern-matched responses to recognized situations, and it produces nonsense or uncertainty or fuzziness on unrecognized ones. The repugnant conclusion is one of those unrecognized scenarios.
I think the question feels off. “Is the FrankenWorm better than Alice” with no “better for what” attached.
“If I have to choose between having the Frankenworm exist or having Alice exist, what does ethics require me to choose?”
“The transitivity argument assumes every step is the same kind of operation.”
If you give up the idea that every step is the same kind of thing, then there must be a step which counts, and a successive and very similar step, which doesn’t count. And that itself will lead to bad conclusions.
“The repugnant conclusion is one of those unrecognized scenarios.”
The problem here is that you can get the repugnant conclusion by stringing together only recognized scenarios that you individually do think you know how to handle.
You haven’t identified the “for what” here. Please explain your point further.
Sorites paradoxes again.
Yeah, the problem being that you are handling each comparison with a flawed model.
Doing A results in a world where Alice exists. Doing B results in a world that is identical except the Frankenworm exists. You must do either A or B (perhaps one of them is to wait without doing the other one.) Does ethics tell you to do A or B here? (Or does it say nothing about what you should do?)
My model of ethics says nothing on choosing between the two states.
I personally don’t see the value in a FrankenWorm. I would choose for it not to exist, if you forced me to choose and that option was otherwise costless to me, but my understanding of morality doesn’t inform that choice.
Whatever it is that motivates that choice is what I am asking about, with the motivations that prioritize yourself or people especially close to you ‘factored out’. That is what I mean when I talk about ‘morality’ here.
Okay, I think I see where you are coming from, hmmm.
Residual preference, after direct self-interest is factored out, could include evolved revulsion, fear of the strange, or a strong preference for not summoning eldritch beings beyond my ken, and still not be a unified moral function. None of these need cohere into a system that aggregates cleanly.
I think we disagree on the need to order all world states with coherent rules. I think saying “idk” is sometimes right.
Inaction is still a choice.
I think that the argument here against “strongly incomplete” preferences is convincing. A strong incompleteness means there exist states A, B, C such that B is preferable to A, but C is incomparable to both A and B. The post shows that such an agent could randomly self-modify in a way that tends to steer the world in a strictly preferable way.
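A sketch of the structure being described (state names invented; this shows only the setup, not the post’s full argument): with B preferred to A and C incomparable to both, any randomly chosen completion of the preferences still respects the already-settled B > A, so completing at random never reverses an existing preference.

```python
import itertools
import random

# "Strong incompleteness": states A, B, C where B > A is settled but C
# is incomparable to both. Names are invented for illustration.
strict = {('B', 'A')}  # (preferred, dispreferred)

def linear_extensions(states, strict):
    """Yield every total order consistent with the settled comparisons."""
    for perm in itertools.permutations(states):
        rank = {s: i for i, s in enumerate(perm)}
        if all(rank[hi] < rank[lo] for hi, lo in strict):
            yield perm  # perm[0] is ranked best

exts = list(linear_extensions(['A', 'B', 'C'], strict))
assert len(exts) == 3  # C can slot in anywhere; B > A survives in all

# Self-modifying by picking a completion at random settles C's place
# arbitrarily, but can never steer against the settled B > A.
completion = random.choice(exts)
assert completion.index('B') < completion.index('A')
```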
Without a strong incompleteness, any incomparable pair can be treated as if you were indifferent. This, then, is just as much a judgement as saying that one is better or worse.
Unless you reject transitivity? I’m not sure how my own preferences could be ‘truly’ intransitive, as opposed to reasoning errors or lossy information about my desires causing me to violate transitivity.
The Wentworth argument runs on agents with stable preferences over well-defined states. Strong incompleteness is unstable on those terms.
However my claim is that the comparison itself is malformed as previously described. You can’t money pump someone over a question that doesn’t have an answer of that shape.
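For contrast, a sketch of the classic money pump (asset names and fees invented): it needs a cycle of strict preferences to push against, whereas a flat “no answer” refuses every fee-for-swap offer and leaves nothing to exploit.

```python
# Classic money pump: cyclic strict preferences A > B > C > A let a
# trader charge a fee for each "upgrade" around the cycle forever.
cycle = {('A', 'B'), ('B', 'C'), ('C', 'A')}  # (preferred, dispreferred)

def will_pay_to_swap(have, offered, prefs):
    # Pay a small fee iff you strictly prefer the offered item.
    return (offered, have) in prefs

money, have = 100.0, 'A'
for offered in ['C', 'B', 'A'] * 3:  # go around the cycle three times
    if will_pay_to_swap(have, offered, cycle):
        money -= 1.0  # fee per swap
        have = offered

assert have == 'A' and money == 91.0  # same asset, nine fees poorer

# With incomparability there is no strict-preference cycle: every
# fee-for-swap offer is declined, and the pump has nothing to grab.
```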
Indifference and incomparability aren’t the same thing. My “idk” is more incomparability, meaning the question doesn’t have an answer that makes sense. It’s like asking which is hotter, the number 7 or the colour blue.
I don’t understand what you mean. Here, I am talking purely of my own preferences.
Some of us care about ‘silly rules’, because we find them compelling.