I agree with your points on the Sadistic Conclusion issue. Arrhenius acknowledges that his analysis depends on the (to him trivial) assumption that there are “positive” welfare levels. I don’t think this axiom is trivial, because it interestingly implies that non-consciousness somehow becomes “tarnished” and non-optimal. Under a Buddhist view of value, this would be different.
Right, but if someone has a preference to live forever, does that mean that infinite harm has been done if they die?
If all one person cared about was to live for at least 1′000 years, and all a second person cared about was to live for at least 1′000′000 years (and after their desired duration they would become completely indifferent), would the death of the first person at age 500 be less tragic than the death of the second person at age 500′000? I don’t think so, because assuming that they value partial progress on their ultimate goal the same way, they both ended up reaching “half” of their true and only goal. I don’t think the first person would somehow care less in overall terms about achieving her goal than the second person.
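To make the comparison concrete, here’s a minimal sketch (just my own toy model, not anything from Arrhenius) that scores a single “live for at least T years” preference by the fraction of the desired duration actually reached:

```python
# Toy model: score a "live for at least T years" preference by the fraction
# of the desired duration actually reached, capped at 1. Purely illustrative.

def satisfaction(years_lived: float, years_desired: float) -> float:
    """Fraction of the desired lifespan actually reached, in [0, 1]."""
    return min(years_lived / years_desired, 1.0)

person_1 = satisfaction(500, 1_000)          # died at 500, wanted 1'000 years
person_2 = satisfaction(500_000, 1_000_000)  # died at 500'000, wanted 1'000'000 years

print(person_1, person_2)  # both 0.5
```

On this measure the two deaths come out as exactly equally tragic, which is the intuition I’m pushing here.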
To what extent would this way of comparing preferences change things?
What I’m trying to say is, people have an awful lot of preferences, and generally only manage to satisfy a small fraction of them before they die.
I think the point you make here is important. It seems like there should be a difference between beings who have only one preference and beings who have an awful lot of preferences. Imagine a chimpanzee with a few preferences and compare him to a sentient AGI, say. Would both count equally? If not, how would we determine how much their total preference (dis)satisfaction is worth? The example I gave above seems intuitive because we were talking about humans who are (as specified by the unwritten rules of thought experiments) equal in all relevant respects. With chimps vs. AI it seems different.
I’m actually not sure how I would proceed here, and this is of course a problem. Since I’d (in my preference-utilitarianism mode) only count the preferences of sentient beings and not e.g. the revealed preferences of a tree, I would maybe weight the overall value by something like “intensity of sentience”. However, I suspect that I’m inclined to do this because I have strong leanings towards hedonistic views, so it would not necessarily fit elegantly with a purely preference-based view on what matters. And that would be a problem because I don’t like ad hoc moves.
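Just to pin down what I mean by “weighting”, here’s a minimal sketch with entirely invented numbers, where each being’s net preference (dis)satisfaction gets multiplied by a sentience weight before everything is summed up:

```python
# Toy aggregation: weight each being's net preference (dis)satisfaction by an
# "intensity of sentience" factor before summing. All numbers are made up
# purely for illustration; nothing here says how they could be measured.

beings = [
    # (label, sentience_weight, net_preference_satisfaction in [-1, 1])
    ("chimpanzee",   0.4,  0.2),
    ("human",        1.0, -0.1),
    ("sentient AGI", 3.0,  0.5),
]

overall_value = sum(weight * score for _, weight, score in beings)
print(overall_value)  # 0.4*0.2 + 1.0*(-0.1) + 3.0*0.5 = 1.48
```

Of course, where the weights would come from is exactly the kind of ad hoc ingredient I just said I dislike.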
Or maybe a better way to deal with it would be the following:
Preferences ought to be somewhat specific. If people just say “infinity”, they still aren’t capable of envisioning what this would actually mean. So maybe a chimpanzee could only envision a certain number of things because of some limit on brain complexity, while typical humans could envision slightly more, but nothing close to infinity. In order for someone to have, at a given moment, the preference to live forever, that person would in this case need an infinitely complex brain to properly envision all that this implies. So you’d get an upper bound that prevents the problems you mentioned from arising.
You could argue that humans actually do want to live forever by making use of personal identity and transitivity (e.g. “if I ask in ten years, the person will want to live for the next ten years and be able to give you detailed plans; and keep repeating that every ten years”), but here I’d say we should just try to minimize the preference-dissatisfaction of all consciousness-moments, not of persons. I might be talking nonsense with the word “envision”, but something along these lines seems plausible to me too.
The two possibilities you propose don’t seem plausible to me. I have a general aversion to things you’d only come up with in order to fix a specific problem and that wouldn’t seem intuitive from the beginning / from a top-down perspective. I need to think about this further.