Objection:
h(t) already accounts for boredom and ‘tired of life’ effects.
You’ve anticipated this:
> One could argue that this should already be factored into “happiness”, but well, it’s not like I actually defined what happiness is. More seriously, perhaps rather than happiness it is better to think of h as the “quality of life”.
But previously:
> h(t) is happiness at time t (for now we think of it as hedonistic utilitarianism, but I propose a preference utilitarianism interpretation later).
I can’t find any place where you did define h(t), but I think the way to follow the intention you set is to calibrate h(t) so that it equals zero when the moral patient is indifferent between never being born and a life lived entirely at that level of happiness. In order to set h(t) equal to quality of life, happiness would then have to equal quality of life minus the ‘tired of life’ correction.
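Spelling that out, with q(t) for quality of life and c(age) for the ‘tired of life’ correction (both my notation, nothing the post defines):

$$\text{happiness}(t) = q(t) - c(\mathrm{age}(t)), \qquad \text{happiness} \equiv 0 \iff \text{indifferent between this life and never being born},$$

so reading h as q rather than as happiness quietly absorbs the c term into h.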
Objection:
The “−u_0” term and the “−h_0([age factor])” term double-count the concept that there are some lives that are worth living but not worth causing to exist. u_0 is the net lifetime utility necessary for a person to be worth creating, and h_0 is the instantaneous quality of life required for existing to be better than not existing; someone whose h(t) − h_0([age factor]) is a positive epsilon for their entire life is constantly slightly happier to have been born than not, yet their net utility over their lifetime is approximately −u_0.
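To make the double-counting arithmetic concrete, suppose lifetime utility has the shape the two terms suggest (this exact formula is my reconstruction, not quoted from the post):

$$U = -u_0 + \int_0^T \big(h(t) - h_0(\mathrm{age}(t))\big)\,dt.$$

Then a life with h(t) − h_0(age(t)) = ε throughout gives U = −u_0 + εT, which stays ≈ −u_0 for small ε: every instant clears the ‘worth existing’ bar, yet the life as a whole falls far short of the ‘worth creating’ bar.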
Note that ‘dying sucks’ is already included in h(t), because h(dying) is expected to be very negative. h(suicide) is often even lower than h(death), as revealed by a dive into the psychological literature on the behavior of people for whom h(suicide) < h(now) < h(death).
Objection: There is no protection against utility monsters, people who have h(t) so high that their existence and increasing their h dominates every other consideration. If that is patched to cap the amount of utility that any one individual can have, there is no protection against utility monster colonies, such that each member of the colony has maximum h(t) and the colony is so numerous that its collective utility dominates every other consideration. (this is a weak objection, since it applies equally well to competing theories of utilitarianism)
A better utilitarianism model might include a weight factor for ‘how much’ of a moral patient an entity is. A rock would have a weight of 0, and the scale would be normalized such that the intended audience has a weight of 1. That does allow individual utility to have an upper cap without creating the utility colony problem, since the total moral patient weight of such a colony would have a finite value based on how much its welfare actually dominated the total utility of all moral patients.
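A sketch of what such a weighting could look like (the symbols w_i and h_max are mine, purely illustrative):

$$U_{\text{total}} = \sum_i w_i \int h_i(t)\,dt, \qquad w_{\text{rock}} = 0,\quad w_{\text{intended audience}} = 1,\quad |h_i(t)| \le h_{\max}.$$

A colony then contributes at most $(\sum_i w_i)\,h_{\max}\,T$, so bounding its total moral-patient weight bounds its claim on total utility even with the individual cap in place.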
> Note that ‘dying sucks’ is already included in h(t), because h(dying) is expected to be very negative.
If you really want, you can reinterpret −u_0 as some negative spike in h(t) at the time of death, one that occurs even if the person died instantly without any awareness of death. I think that maybe my use of the word “happiness” was confusing due to the nebulous nature of this concept and instead I should have talked about something like “quality of life” or “quality of experience” (still nebulous, but a little less so, maybe).
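Written out, that reinterpretation (the Dirac-delta form is my way of making it precise, not the post’s):

$$\tilde h(t) = h(t) - u_0\,\delta(t - t_{\text{death}}) \quad\Longrightarrow\quad \int \tilde h(t)\,dt = \int h(t)\,dt - u_0,$$

i.e. the spike moves the −u_0 term inside the integral without changing the total.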
> There is no protection against utility monsters, people who have h(t) so high that their existence and increasing their h dominates every other consideration. If that is patched to cap the amount of utility that any one individual can have, there is no protection against utility monster colonies, such that each member of the colony has maximum h(t) and the colony is so numerous that its collective utility dominates every other consideration.
I only intended to address particular issues, not give a full theory of ethics (something that is completely infeasible anyway). I think h is already bounded (ofc we cannot verify this without having a definition of h in terms of something else, which is entirely out of scope here). Regarding the “utility monster colony” I don’t see it as a problem at all. It’s just saying “the concerns of a large population dominate the concerns of a small population” which is fine and standard in utilitarianism. The words “utility monster” are not doing any work here.
> A better utilitarianism model might include a weight factor for ‘how much’ of a moral patient an entity is.
I agree something like this should be the case; like I said, I had no intention to address everything.
Does it matter whether or not, when I go to sleep, that which makes me a moral patient ends, and a new moral patient exists when I wake up?
If a mad scientist ground my body into paste each night and replaced me with a copy that nobody (including me) could tell was different in the morning, how much would that actually suck over the long term?
Having a term outside the integral means caring about uncertain continuity questions, and while the assumed answer to those questions is clear, assuming an answer to the pure question just shifts the question to the new edge case: does a concussion with momentary blackout count as death? Is there no level of temporary brain damage that counts as death? Is deleting a transhumanist brain scan as bad as killing a meat body? (No, it can’t be, unless it’s the last copy; but does the existence of a copy make killing the meat body less bad?)
> Does it matter whether or not, when I go to sleep, that which makes me a moral patient ends, and a new moral patient exists when I wake up?
Yes, it matters (btw the answer is “no”). Sure, it is highly non-trivial to pinpoint exactly what is death in reductionist terms. The same is true of “happiness”. But, nobody promised you a rose garden. The utility function is not up for grabs.
Btw, I think there is such a thing as “partial death” and it should be incorporated into the theory.
> If a mad scientist ground my body into paste each night and replaced me with a copy that nobody (including me) could tell was different in the morning, how much would that actually suck over the long term?
Why can the utility differ with no upper bound between two worlds with epsilon observed difference? If I did die every time I slept, there would be certain physical differences that could in theory be measured with a sleep EEG… but I’ve never had a sleep EEG done. In a less convenient world there would be humans whose person died every night to be replaced by a new person, and unless they are happier every day than some people are in their entire life, those humans should be shot permanently dead right away, because they are so damaging to total utility.
If a mad scientist ground you into paste and replaced you with a literally indistinguishable copy, then it doesn’t suck: the copy is still you in the relevant sense. The more different the copy is from the original, the more it sucks, until some point of maximal suckiness where it’s clearly a different person and the old you is clearly dead (it might be asymptotic convergence rather than an actual boundary).
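One way to make that gradient precise, purely as a sketch (the distance d and the curve f are my invention, not the post’s): charge a replacement a death penalty of

$$u_0\, f\big(d(\text{original},\text{copy})\big), \qquad f(0) = 0, \quad f \text{ increasing}, \quad f(d) \to 1 \text{ as } d \to \infty,$$

so an indistinguishable copy (d = 0) costs nothing and the penalty approaches the full u_0 only in the limit, matching the asymptotic-convergence caveat.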
I’m even more confused: based on that, it would not matter if I died every time I fell asleep and was replaced by someone identical to me. And if I did decide to make a major personality change, that would be intrinsically bad.
If you “died” in your sleep and were replaced by someone identical to you, then indeed it wouldn’t matter: it doesn’t count as dying in the relevant sense. Regarding a major personality change, I’m not sure what you have in mind. If you decide to take on a new hobby, that’s not dying. If you take some kind of superdrug that reprograms your entire personality then, yes, that’s pretty close to dying.
Why have a strong preference against a world with a few humans who take such a drug repeatedly?
If someone is in a rut and could either commit suicide or take the reprogramming drug (and expects to have to take it four times before randomizing to a personality that is better than rerolling a new one), why is that worse than killing them and allowing a new human to be created?
> If someone is in a rut and could either commit suicide or take the reprogramming drug (and expects to have to take it four times before randomizing to a personality that is better than rerolling a new one), why is that worse than killing them and allowing a new human to be created?
If such a drug is so powerful that the new personality is essentially a new person, then you have created a new person whose lifespan will be a normal human lifespan minus however long the original person lived before they got in a rut. By contrast, if they commit suicide and you create a new human, you have created a new person who will likely live a normal human lifespan. So taking the drug even once is clearly worse than suicide + replacement since, all else being equal, it is better to create someone with a longer lifespan than a shorter one (assuming the life is net positive overall, of course).
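In terms of the lifetime-utility shape sketched earlier, writing a for the age at which the rut hits and T for a normal lifespan (my labels, all else held equal):

$$U_{\text{drug}} \approx -u_0 + \int_a^T \big(h - h_0\big)\,dt, \qquad U_{\text{replace}} \approx -u_0 + \int_0^T \big(h - h_0\big)\,dt,$$

so replacement wins by the integral over the first a years of a fresh life, which is positive whenever those years are net positive.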
It takes extra resources to grow up and learn all the stuff that you’ve learned, like K-12 and college education. You can’t guarantee that the new person will be more efficient in using resources to grow than the existing person.
Point taken, but for the average person, the time period of growing up isn’t just a joyless period where they do nothing but train and invest in the future. Most people remember their childhoods as a period of joy and their college years as some of the best of their lives. Growing and learning isn’t just preparation for the future, people find large portions of it to be fun. So the “existing” person would be deprived of all that, whereas the new person would not be.
That can be said about any period in life. It’s just a matter of perspective and circumstances. The best years are never the same for different people.
> Most people remember their childhoods as a period of joy and their college years as some of the best of their lives.
This seems anecdotal; “people become jaded as they grow older” is an assertion of a similar nature.
> That can be said about any period in life. It’s just a matter of perspective and circumstances. The best years are never the same for different people.
That’s true, but I think that for the overwhelming majority of people, their childhoods and young adulthoods were at the very least good years, even if they’re not always the best. They are years that contain significantly more good than bad for most people. So if you create a new adult who never had a childhood, and whose lifespan is proportionately shorter, they will have a lower total amount of wellbeing over their lifetime than someone who had a full-length life that included a childhood.
Ethics is subjective. I’m not sure what argument I could make that would persuade you, if any, and vice versa. Unless you have some new angle to approach this, it seems pointless to continue the debate.