No proposal that includes these words is worth considering. There’s no Schelling point between forcing people to die at some convenient age and be happy and thankful about it, and just painting smiles on everyone’s souls. That’s what terminal values are all about: you can only trade off between them, not optimize them away whenever it seems expedient!
If it’s a terminal value for most people to suffer and grieve over the loss of individual life—and they want to suffer and grieve, and want to want to—a sensible utilitarian would attempt to change the universe so that the conditions for their suffering no longer occur, instead of messing with this oh-so-inconvenient, silly, evolution-spawned value. Because if we were to mess with it, we’d be messing with the very complexity of human values, period.
I agree with what you’re saying, but just to complicate things a bit: what if humans have two terminal values that directly conflict? Would it be justifiable to modify one to satisfy the other, or would we just have to learn to live with the contradiction? (I honestly don’t know what I think.)
There’s no Schelling point between forcing people to die at some convenient age and be happy and thankful about it, and just painting smiles on everyone’s souls.
A statement like that needs a mathematical proof.
If it’s a terminal value for most people to suffer and grieve over the loss of individual life
“If” indeed. There is little “evolution-spawned” about it (not that that’s a good argument to begin with, trusting the “blind idiot god”); a large chunk of this is cultural. If you dig a bit deeper into the reasons why people mourn and grieve, you can usually find more sensible terminal values underneath. Why don’t you give it a go?
I agree with what you’re saying, but just to complicate things a bit: what if humans have two terminal values that directly conflict? Would it be justifiable to modify one to satisfy the other, or would we just have to learn to live with the contradiction? (I honestly don’t know what I think.)
Ah… If you or I knew what to think, we’d be working on CEV right now, and we’d all be much less fucked than we currently are.