I welcome criticism of my new personal favorite population axiology:
The value of a world-history that extends the current world-history is the average welfare of every life after the present moment. For people whose lives span the present moment, we evaluate only the welfare of the portion of their life after the present moment. The welfare of a person’s life is allowed to vary nonlinearly with the number of years the person lives a certain kind of life, and it’s allowed to depend on whether the person’s experiences are veridical.
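To make the aggregation rule concrete, here is a minimal sketch in Python. All the names (Person, welfare, world_value) and the choice to assign an empty future zero value are my own illustrative conventions, not part of the proposal:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Person:
    birth: float   # year of birth
    death: float   # year of death
    # Welfare of the slice of this person's life over [start, end].
    # It may vary nonlinearly with duration and may depend on whether
    # the person's experiences during that slice are veridical.
    welfare: Callable[[float, float], float]

def world_value(people: List[Person], now: float) -> float:
    """Value of a world-history judged at `now`: the average, over every
    life that extends past `now`, of the welfare of that life's portion
    after `now`."""
    future_lives = [p for p in people if p.death > now]
    if not future_lives:
        return 0.0  # assumption: an empty future counts as zero
    return sum(p.welfare(max(p.birth, now), p.death)
               for p in future_lives) / len(future_lives)
```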
This axiology implies that it’s important to ensure that the future will contain many people who have better lives than us; it’s consistent with preferring to extend someone’s life by N years rather than creating a new life that lasts N years. It’s immune to Parfit’s Repugnant Conclusion, but doesn’t automatically fall prey to the opposite of the Repugnant Conclusion. It implies that our decisions should not depend on whether the past contained a large, prosperous civilization.
There are straightforward modifications for dealing with general relativity and splitting and merging people.
The one flaw is that it’s temporally inconsistent: If future generations average the welfare of lives after their “present moments”, they will make decisions we disapprove of.
I build a robot that hibernates until the last person presently alive dies, then exterminates all people who are poor, unhappy, or don’t like my robot. Good thing?
A person who has a life worth living could have their life’s welfare increase monotonically with their lifespan. In that case, ending such a life usually makes the world-history worse.
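For instance, with the sketch above and a welfare function that grows one unit per year lived (an arbitrary monotone choice), truncating a life lowers the value:

```python
# Illustrative monotone welfare: one unit of welfare per year lived.
alive     = Person(birth=2000, death=2080, welfare=lambda s, e: e - s)
cut_short = Person(birth=2000, death=2040, welfare=lambda s, e: e - s)

print(world_value([alive], now=2020))      # 60.0
print(world_value([cut_short], now=2020))  # 20.0
```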
If future generations average the welfare of lives after their “present moments”, they will make decisions we disapprove of.
Can you give an example? It seems to me that if they decide at t_1 to maximise average welfare from t_1 to ∞, then given that welfare from t_0 to t_1 is held fixed, that decision will also maximise average welfare from t_0 to ∞.
Edit: oh, I was thinking of an average over time, not people.
Earth produces a long and prosperous civilization. After nearly all the resources are used up, the lean and hardscrapple survivors reason, “let’s figure out how to squeeze the last bits of computation out of the environment so that our children will enjoy a better life than us before our species goes extinct”. But from our perspective, those children won’t have as much welfare as the vast majority of human lives in our future, so their being born would bring our average down. We would want the hardscrapple survivors not to produce more people.
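Here is a toy version of that disagreement, using the earlier sketch (all numbers arbitrary). The child is better off than the survivor, matching “a better life than us”, but still far below the long-run person-average, so the two vantage points rank the same choice oppositely:

```python
prosperous = [Person(birth=2100, death=2200, welfare=lambda s, e: 100.0)
              for _ in range(1000)]
survivor   = [Person(birth=2900, death=3000, welfare=lambda s, e: 5.0)]
child      = [Person(birth=2990, death=3050, welfare=lambda s, e: 10.0)]

# Judged from our present (t_0 = 2050): adding the child lowers the average.
print(world_value(prosperous + survivor, now=2050))          # ≈ 99.9
print(world_value(prosperous + survivor + child, now=2050))  # ≈ 99.8

# Judged from the survivors' present (t_1 = 2950), when the prosperous
# lives no longer count: adding the child raises the average.
print(world_value(prosperous + survivor, now=2950))          # 5.0
print(world_value(prosperous + survivor + child, now=2950))  # 7.5
```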
it’s important to ensure that the future will contain many people who have better lives than us
Are you sweeping the complexity of value under the terms “better” and “veridical”? Does following your axiology prevent humanity from evolving into a race of happy-go-lucky clones?
Yes. It’s hard enough to come up with a decent way of aggregating individual welfares without making a comprehensive theory of value.

Is this different from whether their perception of their experiences is correct, or is it jargon?

Yes, I mean (for example) that if a person believes they’re married to someone, their life’s welfare could depend on whether their spouse is a real person or a simple chatbot. Also, if a person feels that they’ve discovered a deep insight, their life’s welfare could depend on whether they have actually discovered such an insight.

So it’s just jargon. OK.