I wonder if it would work to renormalize utility so that the total of everything that’s “at stake” (in some sense that would need to be made more precise) is always worth the same?
Probably this gives too much weight to easy-to-achieve moralities, like the morality that says all that matters is whether you’re happy tomorrow? It also doesn’t accommodate non-consequentialist moralities.
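To make that worry concrete, here is one possible reading of "everything at stake is worth the same" (the symbols $u_i$, $\hat u_i$, and $p_i$ are my own illustrative notation, not anything specified above): rescale each theory's utility function so its gap between best and worst outcome is 1, then maximize expected normalized utility across theories:

$$
\hat u_i(x) \;=\; \frac{u_i(x) - \min_y u_i(y)}{\max_y u_i(y) - \min_y u_i(y)},
\qquad
\text{choose } x \text{ to maximize } \sum_i p_i\, \hat u_i(x),
$$

where $u_i$ is theory $i$'s utility function over outcomes and $p_i$ is your credence in theory $i$. Under this reading the objection is visible directly: a theory whose entire range is spanned by something trivial (e.g. whether you're happy tomorrow) still gets a full unit of stakes, so it can swing decisions out of proportion to how much it plausibly matters.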
But does it ever make sense to respond to new moral information by saying, “huh, I guess existence as a whole doesn’t matter as much as I thought it did”? It seems counterintuitive somehow.
I can’t follow your comment. I would need some inferential steps filled in: between the prior comment and the first sentence of your comment, and between every sentence of your comment.