You seem to have reinvented some of the arguments for deontology. In particular, this...
The problem with killing grandma is that nobody wants to live in a world where you kill grandma
...is almost Rawls’s veil of ignorance. Well, maybe deontology-flavoured utilitarianism is the best kind. But maybe deontology-flavoured utilitarianism is actually utilitarianism-flavoured deontology.
It was at that point I thought, “We’ve rediscovered Kant’s categorical imperative.”
That’s exactly what I came here to comment. But I think all these problems really come from a weird, abstract idea of “welfare” that imo doesn’t make any sense. It’s the volition of sentient beings that is important. Beings ought to get what they want to the extent that this doesn’t interfere with the same right in others. When two (or more) beings cannot both get what they want, they are obligated to try to find an acceptable compromise. (If they are not of equal degrees of intelligence, of course, the burden of ethical behavior is shared unequally—humans are responsible for treating small children or nonhuman animals correctly but they, being unable to understand moral rules, are not responsible for treating us correctly in turn.)
It is simply not reasonable to unilaterally impose one’s own desires onto others—it breaks that foundational rule. In principle we should maximize get-what-they-want-ness across all beings, but to do so in a way that blatantly disregards the right not to be imposed upon is obscene.
That is: the right to get what you want and not get what you don’t want is what generalizes to consequentialism over volition—not the other way around. That right is primary and must be respected as much as possible in every single instance. And all these examples disrespect that right in some way. The debate team guy made an agreement which he then broke without renegotiating it first; the people killed to help others did not agree that this should happen; etc.
To put it another way: people can do whatever they want as long as they don’t break contracts they have negotiated with other people. (In practice not every social contract is actually agreed to—nobody signs a contract saying they won’t murder anyone—but that’s part of how our world is not currently maximally ethical in its arrangement.) However, what people ought to do (but are not imo obligated to do) is that which, relative to their subjective knowledge, maximizes the total get-what-you-want-ness for everyone conditional upon obeying that first rule.
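To make that lexical ordering concrete, here is a minimal sketch in Python (the Action type, the numeric satisfaction scores, and the scenario are all invented for illustration, not anything from the thread): consent acts as a hard filter, and maximization happens only within what survives the filter.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    total_satisfaction: float  # aggregate get-what-you-want-ness (a made-up score)
    violates_consent: bool     # does it impose on someone who hasn't agreed?

def choose(actions: list[Action]) -> Action | None:
    # The consent rule is lexically prior: filter first, maximize second.
    # No amount of aggregate satisfaction can buy a violation.
    permissible = [a for a in actions if not a.violates_consent]
    if not permissible:
        return None  # nothing permissible is on the table: renegotiate, don't impose
    return max(permissible, key=lambda a: a.total_satisfaction)

# The classic trap: killing one to save five scores higher on aggregate
# satisfaction, but it violates the victim's consent, so it is filtered out.
options = [
    Action("harvest one to save five", total_satisfaction=4.0, violates_consent=True),
    Action("help the five without killing anyone", total_satisfaction=3.0, violates_consent=False),
]
print(choose(options).name)  # -> "help the five without killing anyone"
```

The point of the two-stage structure is that the constraint is not a penalty term to be traded off against welfare; it is a filter that aggregate welfare can never outbid.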
You seem to be assuming that everyone is a competent adult.
I need a little more explanation about what you’re intending to say here. Can you specifically tell me what you think is wrong or unworkable about what I said, and why?
If an entity has imperfect insight into its own needs and desires, then it can be beneficial for that entity if others impose on it what it thinks it doesn’t want, or keep it from what it mistakenly thinks it does want. That’s generally built into adult-child relations, but adults are not omniscient either, so the problem does not disappear.
If indeed you know of a course of action that would benefit someone more than the course they currently want to go on, you can provide them an incentive to change their mind willingly. A bet would do: “If you try X instead and afterward don’t honestly think it’s better than the Y you thought you wanted, I’ll pay Z in recompense for your troubles.” (Of course, I’m skeptical that me-in-the-future has any right to define what was best retroactively for me-in-the-past, due to not actually being exactly the same person, but let’s just assume that for now.)
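To spell out when such a bet is worth taking from the other person’s point of view, here is a toy expected-value sketch (the function name, the probabilities, and all the numbers are hypothetical, chosen only to illustrate the mechanism):

```python
# Toy expected-value model of the compensated bet. The other person values
# their preferred course Y at u_y; p is their own credence that X will turn
# out better; z is the compensation you offer if X disappoints them.

def bet_is_worth_taking(u_y: float, u_x_if_right: float, u_x_if_wrong: float,
                        p: float, z: float) -> bool:
    """True iff trying X (with compensation z on disappointment) is at
    least as good in expectation as sticking with Y."""
    expected_x = p * u_x_if_right + (1 - p) * (u_x_if_wrong + z)
    return expected_x >= u_y

# Even someone quite skeptical of your advice (p = 0.2) can be brought on
# board willingly, provided z is large enough: 0.2*15 + 0.8*(6+6) = 12.6 >= 10.
print(bet_is_worth_taking(u_y=10, u_x_if_right=15, u_x_if_wrong=6, p=0.2, z=6))  # True
```

Raising z can always tip the calculation without overriding anyone’s will, which is exactly what makes the bet an incentive rather than an imposition.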
This is totally ethical and does not infringe upon subjective freedom of will. I do not think anyone has the right to force anyone else to change their mind or act against what they believe they want, unless their preferred course of action would actually endanger their life (as in the case of a parent picking up their toddler who walks into the road). Even if they’re wrong, it’s their responsibility to be wrong and learn from it, not to be saved from their own not-yet-made mistakes.
I haven’t yet decided if interfering with intentional suicide is ethical or not. (My suspicion is that suicide is immoral, as it is murder of all one’s possible future selves who would not, were they present now, consent to being prevented from existing, meaning that preventing suicide is likely an acceptable tradeoff protecting their rights while infringing upon those of the suicidal person. But it will take more thought.)
To me it seems that the individual is always the arbiter of what is best for them. Only that individual—not anyone else, not even an AI modeling their mind. Of course, a sufficiently powerful AI would easily be able to convince them to desire different things using that mind model, but the extrapolated volition is nonetheless not legitimate until willingly accepted by the person—the AI does not have the right to implement it independently without consent. (And I, personally, would not give blanket consent for an AI to manage my affairs.)
Hmm. That suicide example does present a way in which your view here could be interpreted as true within my framework, now that I think about it. But since I don’t consider entities to be identical to past or future versions of themselves, it sounded very wrong to me. Nobody can be wrong about what they want right now. But people can be mistaken about what future versions of themselves would have wanted them to do right now, due to lack of knowledge about the future. And inasmuch as you consider yourself, though not identical to them, to be continuous with them (the same person “in essence”), you ought to take their desires into account. Since you can be mistaken about that, others who can prove they know better have the right to interfere on those future selves’ behalf… but only those future selves have the right to say whether the interference was legitimate. Hence the bet I described at the beginning. Interesting! Thanks for the opportunity to think about this.