I believe you are missing Kant’s point regarding free will. People have free will. Rocks don’t. And that is why it makes moral sense for you to want a universe with happy people, and not a universe with happy rocks!
People deserve happiness because they are morally responsible for causing happiness. Rocks take no responsibility, hence those of us who do take responsibility are under no obligation to worry about the happiness of rocks.
Utilitarians of the LessWrong variety tend to think that possession of consciousness is important in determining whether some entity deserves our moral respect. Kant tended to think that possession of free will is important.
As a contractarian regarding morals, I lean toward Kant’s position, though I would probably express the idea in different language.
Generally speaking, I’m uneasy about any reduction of a less-confused concept to a more-confused one, and free will is a more confused concept than moral significance. Moreover, I can imagine things that would change my perspective on free will without also changing my perspective on moral significance. For example, if we interpret free will as unsolvability by rivals, then the birth of a superintelligence would cause everyone to lose their free will, yet it would have no effect on anyone’s moral significance.