Yes. At least as long as there are problems in the world. What’s wrong with that?
Everyone, including nonhumans, would have their interests or welfare fulfilled as well as possible. If I had to determine the utility function of moral agents before being placed into the world in some position at random, I would, from a selfish point of view, choose some form of utilitarianism, because it maximizes my expected well-being. And if doing the "morally right" thing doesn't make the world a better place for the sentient beings in it, I see no reason to call it "right". Note also that this is not an all-or-nothing issue: it seems unfruitful to single out only those actions that produce the perfect outcome, or the perfect outcome in expectation. Every improvement in the right direction counts, because every improvement leaves someone better off.