I would even precommit to feeling bad, with negative utility of at least (dust speck in the eye minus epsilon), if any rational agent could conceivably choose torture over dust specks. If I had to choose whether to torture a human for 50 years or give 3^^^3 people dust specks, I would calculate the total utility with torture as lower than with dust specks, because I would project my morality onto those 3^^^3 people, all of whom would then be slightly sadder if I chose torture than if I chose dust specks.
If your objective is to make it right to choose 3^^^3 dust specks over torture, this doesn’t work. If you count the negative utility of one person knowing that another is suffering, you should still choose torture over 3^^^3 dust specks, because you also have to count the negative utility of 3^^^3 people knowing about 3^^^3 dust specks.
Specifically, suppose the universe contains N+1 people, and we can either torture one person or give N people dust specks. If the disutility of the sadness of one person knowing that another is being tortured is x, then torturing one person has disutility Nx (plus a constant from the bare fact that a person is being tortured, which is irrelevant if N is large enough). Meanwhile, if the disutility of the sadness of one person knowing that another person has a dust speck in their eye is y, then giving one person a dust speck has disutility Ny, and giving N people dust specks has disutility N^2 y (plus an order-N quantity from the bare fact that N people get dust specks, which is irrelevant if N is large enough). No matter what x and y are, N^2 y > Nx if N is large enough. 3^^^3 is gigantically huge, and thus certainly large enough. Thus if N is large enough, the disutility of 3^^^3 dust specks is greater than that of torture.
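As a minimal sketch of this bookkeeping (all numbers below are made-up illustrative values, not anything from the discussion; only the scaling in N matters):

```python
def disutility_torture(N, x, T):
    # N onlookers each feel sympathy x, plus the victim's own suffering T.
    return N * x + T

def disutility_specks(N, y, s):
    # N people each get a speck (N * s), and each person also knows about all
    # N specks, contributing roughly N * N * y of sympathy.
    return N * s + N * N * y

x, T = 1_000.0, 1_000_000.0   # assumed: sympathy for a torture, and the torture itself
y, s = 1e-9, 1e-6             # assumed: sympathy for one speck, and the speck itself

for N in (10**3, 10**9, 10**13, 10**16):
    print(N, disutility_torture(N, x, T), disutility_specks(N, y, s))

# Past N = x / y (here 10**12) the N^2 * y term dominates, so the speck option
# carries the larger total disutility -- and 3^^^3 dwarfs any such crossover.
```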
To get around this conclusion, you could try to make x infinite (this seems to be something you are dancing around), but if x is really infinite, then you should be doing nothing in your life that is not devoted to preventing torture, or even the slightest possibility that someone might someday be tortured. Everything nonessential should be sacrificed to this end, because a literal infinity of utility is on the line. Or you could say that y = 0, if you literally couldn't care less whether other people get dust specks, in which case you should substitute in place of dust specks some other minor discomfort that you would rather spare other people from experiencing.
Maybe a better attempt is to argue that one person’s sadness about other people’s dust specks does not increase linearly with the number of dust specks. If there is an upper limit to how sad one person can feel about N dust specks that cannot be exceeded no matter how large N is, then your utility function may recommend dust specks over torture. But I suspect this position has problems. I am going to think about it some.
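A sketch of that bounded-sympathy idea, assuming purely for illustration that one person's sadness about N specks saturates at some ceiling y_max:

```python
import math

def sympathy_for_specks(N, y, y_max):
    # Grows roughly like y per speck at first but never exceeds the ceiling y_max.
    return y_max * (1.0 - math.exp(-y * N / y_max))

def disutility_specks_bounded(N, y, s, y_max):
    # Direct specks still add N * s; the sympathy term is now at most N * y_max.
    return N * s + N * sympathy_for_specks(N, y, y_max)

# With the ceiling, the total grows only like N rather than N^2, so whether
# specks ever beat torture depends on how y_max and s compare with x, not on N alone.
```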
If that is not a rational way to calculate the utility functions of other beings, then why shouldn't I choose to torture a human for 50 years to maximize the utility of 3^^^3 paperclip maximizers who might be distracted from perfectly producing paperclips by a dust speck?
You can rationally give dust specks to 3^^^3 paperclip maximizers instead of torturing one human if your disutility for giving one paperclip maximizer a dust speck is less than or equal to zero. This seems quite compatible with standard human morality, especially if these paperclip maximizers inspire zero empathy or if their paperclip-maximizing is dangerous to humans. But if you would feel even the slightest bit of remorse about giving a paperclip maximizer a dust speck, torture one human instead of multiplying that remorse by a stupefyingly huge number like 3^^^3.
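The same point as a toy decision rule; remorse_per_speck and torture_disutility are hypothetical quantities belonging to the chooser, not anything specified above:

```python
def choose(N, remorse_per_speck, torture_disutility):
    # Your total remorse from giving N paperclip maximizers specks, vs. your
    # disutility for one human being tortured.
    specks = N * remorse_per_speck
    return "dust specks" if specks <= torture_disutility else "torture"

# With remorse_per_speck <= 0 the speck total never exceeds the torture term,
# so specks remain the right choice for any N. With any positive remorse, a
# number like 3^^^3 multiplies it past the torture term and flips the answer.
```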
No matter what x and y are, N^2 y > Nx if N is large enough. 3^^^3 is gigantically huge, and thus certainly large enough. Thus if N is large enough, the disutility of 3^^^3 dust specks is greater than that of torture.
Am I not allowed to set my disutility for knowing that another person gets a dust speck to zero? I admit that before I thought about this problem I had no opinion about other people getting dust specks in their eyes or not, but I had a strong opinion against torture. Additionally, even now I would say that the margin of error on my estimate of the disutility of a dust speck in someone else's eye is far greater in magnitude than the estimate itself, so zero may be a reasonable choice to avoid unintuitive conclusions. Maybe occasional dust specks would help all those folks staring at computer screens and forgetting to blink, for instance, and this would outweigh any annoyance for the rest of humanity.

There's also the next level of evaluation: z, the utility of knowing that another person would willingly receive a dust speck in their eye to collectively prevent a single person from being tortured. If that utility outweighs the disutility of knowing that another person received a dust speck in their eye, then N^2 z outweighs both of the other terms. I think I would be happier knowing I was in a society that completely agreed with me about not torturing one person to prevent dust specks from getting in their eyes than I would be if no one received that particular dust speck in their eye. Even if only half of the population thought that way, (N/2)^2 y and (N/2)^2 z are smaller only by a constant factor and would still dominate Nx.

This may create a paradox, because now the utility of getting a dust speck in the eye is zero or positive in the world of N individuals. It may not be a paradox if the utility only becomes non-negative as a condition of dealing with specific ethical meta-questions, but it is already playing with the limits of another sort of insanity: if agents are concerned about every possible dilemma of this nature, they may precommit to taking on a large (potentially unbounded) amount of disutility without any actual situation ever requiring them to experience that disutility to save a real person from torture.
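A sketch of how that z term would enter the same bookkeeping, with all quantities again assumed only for illustration:

```python
def net_disutility_specks_with_solidarity(N, y, s, z):
    # N direct specks, plus N^2 sympathy terms, minus N^2 solidarity terms:
    # each of N people gains z from knowing the others willingly took a speck.
    return N * s + N * N * y - N * N * z

# If z > y, the N^2 terms net out negative for large N, so the speck option ends
# up preferred to torture, whose total grows only like N * x. Halving the number
# of people who feel this way just rescales the N^2 terms by a factor of 1/4.
```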
Maybe a better attempt is to argue that one person’s sadness about other people’s dust specks does not increase linearly with the number of dust specks. If there is an upper limit to how sad one person can feel about N dust specks that cannot be exceeded no matter how large N is, then your utility function may recommend dust specks over torture. But I suspect this position has problems. I am going to think about it some.
The diminishing marginal utility of goods would suggest that diminishing marginal disutility exists too. Once everyone experiences something, it becomes much more normal than if only one person suffers it. This is, of course, also the reason that not many people are adamant about preventing death from old age, so it's not necessarily good reasoning in general. Equalizing disutility may sound fair, but it is probably not always right.
You can rationally give dust specks to 3^^^3 paperclip maximizers instead of torturing one human if your disutility for giving one paperclip maximizer a dust speck is less than or equal to zero. This seems quite compatible with standard human morality, especially if these paperclip maximizers inspire zero empathy or if their paperclip-maximizing is dangerous to humans. But if you would feel even the slightest bit of remorse about giving a paperclip maximizer a dust speck, torture one human instead of multiplying that remorse by a stupefyingly huge number like 3^^^3.
I think you are correct, and it becomes more apparent if you frame the question as a limit. As N tends toward infinity, do you apply -x utility to each of N individuals or apply -y utility to 1 individual? The only way to ever choose -x utility to N individuals is if you weight all but a finite number of the N individuals' utility at zero. This may mean we have to weight every individual who doesn't share our exact morality at zero to avoid making our own immoral decisions.
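A sketch of that limit framing (the x and y here follow this comment's usage, not the earlier one), assuming hypothetical per-person weights w_i:

```python
def weighted_total(weights, x):
    # Total disutility of applying -x to each individual i, weighted by w_i.
    return sum(w * x for w in weights)

# With every weight equal to 1 the total is N * x, which eventually exceeds any
# fixed y, so the N-person option can never win in the limit. The point above:
# to keep that total permanently below y, all but a finite number of the
# weights would have to be set to (effectively) zero.
```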