Hmm. I don’t think those priorities are necessarily circular, but that doesn’t mean they are correct: I seem to have similar priorities, but after writing all of them out in this post, I think I may need to reconsider them, because they seem quite depressing. (It’s also entirely possible these AREN’T similar, in which case I still need to reconsider mine, but perhaps you don’t need to reconsider yours.)
They seem similar to the priorities of a selfish ruler who cares mostly about his fellow nobles, but who worries about being too selfish in helping them, because if there is a majority of unhappy peasants, that’s revolting. I’ll list three priorities, A through C.
Priority A: If there are peasants, I don’t want there to be a majority of them who are unhappy because of nobles, because that would hurt the happiness of nobles.
Priority B: I want there to be more happy nobles.
Priority C: I want there to be more peasants who are happy as well.
Now let me consider your points:
(1) It is better to create a small population of creatures with complex humane values (that has positive welfare) than a large population of animals that can only experience pleasure or pain, even if the large population of animals has a greater total amount of positive welfare. For instance, it is better to create a population of humans with 50 total welfare than a population of animals with 100 total welfare.
(1) Fits: Priority B is more important than Priority C.
(2) It is bad to create a small population of creatures with humane values (that has positive welfare) and a large population of animals that are in pain. For instance, it is bad to create a population of animals with −75 total welfare, even if doing so allows you to create a population of humans with 50 total welfare.
(2) Fits: if the only way to achieve Priority B is to break Priority A, don’t do it.
(3) However, it seems like, if creating human beings weren’t an option, it might be okay to create a very large population of animals, the majority of which have positive welfare, but some of which are in pain. For instance, it seems like it would be good to create a population of animals where one section of the population has 100 total welfare and another section has −75, since the total welfare is 25.
(3) Fits: if you can pursue Priority C without violating Priority A, then do it.
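The three-priority scheme above behaves like a lexicographic decision rule, and since the quoted points come with concrete welfare numbers, it can be sanity-checked with a toy model. This is purely my own sketch: the scenarios and numbers are from the quoted points, but the exact veto condition for Priority A is my interpretation.

```python
# Toy lexicographic model of Priorities A > B > C (my construction, not the
# original commenter's formalism). Each option is a tuple:
# (human_welfare, animal_welfare_positive, animal_welfare_negative).

def priority_key(option):
    """Sort key: Priority A first (don't create humans at the cost of
    suffering animals), then Priority B (human welfare), then Priority C
    (net animal welfare)."""
    human, animal_pos, animal_neg = option
    violates_a = human > 0 and animal_neg < 0
    return (not violates_a, human, animal_pos + animal_neg)

# The scenarios from points (1)-(3), with their stated welfare totals:
scenarios = {
    "humans_50": (50, 0, 0),            # (1): small humane population
    "animals_100": (0, 100, 0),         # (1): larger animal welfare total
    "humans_plus_pain": (50, 0, -75),   # (2): humans plus suffering animals
    "mixed_animals": (0, 100, -75),     # (3): animals only, net welfare +25
}

best = max(scenarios, key=lambda name: priority_key(scenarios[name]))
print(best)  # humans_50: Priority B beats C, and scenario (2) is vetoed by A
```

Note that the rule still ranks `mixed_animals` above `humans_plus_pain`, matching points (2) and (3): net-positive animal creation is fine on its own, but not as the price of creating humans.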
In essence: when someone is below you in some way (a peasant, an animal, an “other”), you may end up helping them not because you want them to be happy, but because you are afraid of what would happen if they are not happy enough, even if the fear is only “If they aren’t happy enough, I’d feel guilty.” It’s not that you dislike them: if you can give them happiness at no cost to yourself, sure.
Now that I’ve typed that out, I’m fairly sure I’ve acted like that on several different occasions, which really makes me think I need to reevaluate my ethics: now that I’ve noticed the pattern, I can’t actually justify it.
Although part of the reason it may be hard to justify is that I would have to explain to others who are somewhat distant, “Well, I care about you, but I don’t really CARE care, you know? Because of X, Y, and Z, you see. You understand.” That’s not really something that’s polite to say to someone’s face. And if you say in public, “Yeah, I don’t really care about anyone who isn’t my friends or family, I just want to be sufficiently nice to all of you that I don’t get hurt out of spite or guilt,” that’s offensive.
And what really bothers me is that, if I’m honest, I do the same thing to people I care about at least some of the time, and they do the same thing to me some of the time too.
You’re right to say that treating actual people that way seems pretty unpleasant. But the examples I gave involved creating new people and animals, not differentially treating existing people and animals in some fashion.
I don’t see refusing to create a new person because of a certain set of priorities as morally equivalent to disrespectfully treating existing persons because of that set of priorities. If you say “I care about you, but I don’t really CARE, you know?” to a person you’ve decided to not create, who are you talking to? Who have you been rude to? Who have you been offensive to? Nobody, that’s who.
I agree with you that we have a duty to treat people with care and respect in real life. That’s precisely why I oppose creating new people in certain circumstances: once they’re created there is no turning back; you have to show them that respect and care, and you have to be genuine about it, even if doing so messes up your other priorities. I want to make sure that my duty to others remains compatible with my other priorities, and I don’t see anything wrong with creating slightly fewer people in order to do so.
Or to put it another way, it seems like I have a switch that, when flipped, takes me from considering a person in the sort of cynical way you described to considering them a fellow person whom I love, care for, and respect with all my heart. And what flips that switch from “cynical” to “loving and compassionate” is an affirmative answer to the question “Does this person actually exist, or are they certain to exist in the future?” I don’t see this as a moral failing. Nonexistent people don’t mind if you don’t love or respect them.
Although part of the reason it may be hard to justify is that I would have to explain to others who are somewhat distant, “Well, I care about you, but I don’t really CARE care, you know? Because of X, Y, and Z, you see. You understand.” That’s not really something that’s polite to say to someone’s face. And if you say in public, “Yeah, I don’t really care about anyone who isn’t my friends or family, I just want to be sufficiently nice to all of you that I don’t get hurt out of spite or guilt,” that’s offensive.
I see it as an unfortunate fact of limited resources. Caring about the entire world in enough detail is impossible: each of us only has a few neurons to spare for every other person on Earth. Until we can engineer ourselves to care more, we will have to be satisfied with putting most other people into classes, caring about each class as a whole, and leaving individual caring to other individuals within that class. Being sufficiently nice to any particular member of a class is probably the most you can rationally do for them, barring unusual evidence. If someone is obviously out of the norm for their class (starving, injured, in danger, etc.) and you can immediately help them significantly more efficiently than another member of the class could, then you should probably spend more caring on them. This avoids the bystander effect: you care only about overall accidents and injuries in the class in general, but also care specifically about an individual when you happen to be in a good position to help.
Otherwise, try to maximize the utility of each class as a whole, in proportion to your ability to efficiently affect it, weighted appropriately against your other classes and individuals.
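One simple reading of that allocation rule, entirely my own sketch (the comment doesn’t specify a formula, and the group names, weights, and efficiencies below are invented for illustration), is to split limited caring effort in proportion to how much you weight a class times how efficiently you can help it:

```python
# Hypothetical proportional-allocation sketch of "weight * efficiency" caring.

def allocate_caring(effort, groups):
    """groups maps name -> (weight, efficiency). Splits total effort in
    proportion to weight * efficiency."""
    scores = {name: w * e for name, (w, e) in groups.items()}
    total = sum(scores.values())
    return {name: effort * s / total for name, s in scores.items()}

shares = allocate_caring(10.0, {
    "family": (5.0, 1.0),              # high weight, ordinary efficiency
    "strangers": (1.0, 0.5),           # low weight, hard to help individually
    "stranger_in_danger": (1.0, 4.0),  # same weight, but you can help efficiently
})
```

The “stranger in danger” gets a larger share than ordinary strangers despite carrying the same weight, mirroring the “out of the norm for their class” exception above.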