Do you think a logarithmic scale makes more sense than a linear scale?
Assuming that this article is a reaction to “Torture vs. Dust Specks”: the hypothetical number of people suffering from dust specks was specified as 3^^^3, which is, for all practical purposes, an unimaginably large number. Even quantities like “the number of particles in the entire known universe” are not sufficient to describe its number of digits. Therefore, using a logarithmic scale changes nothing.
A logarithmic scale with a hard cap is an inelegant solution, comparable to a linear scale with a hard cap.
What you probably want instead is a formula like the one in the theory of relativity, where the speed of a rocket approaches but never reaches a certain constant c. For example, you might claim that if the badness of some specific thing is X, then the badness of that thing happening even to a practically infinite number of people still only approaches some finite value C*X. (I am not sure whether C is constant across different kinds of suffering.)
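Something like this toy model, for instance (a minimal sketch; the exponential form and the constants c and k are purely my own illustrative assumptions, not something implied by relativity or by the article):

```python
import math

def aggregate_badness(x, n, c=1000.0, k=1e6):
    """Illustrative saturating model: the total badness of n people each
    suffering badness x approaches, but never exceeds, c * x.

    expm1 is used so the result stays accurate for very small n / k.
    """
    return -c * x * math.expm1(-n / k)

# Each additional sufferer adds less badness than the previous one:
print(aggregate_badness(x=1.0, n=10**6))   # ~632.1
print(aggregate_badness(x=1.0, n=10**12))  # ~1000.0, effectively the cap c * x
```

With this shape, 3^^^3 sufferers and “merely” 10^12 sufferers come out almost exactly the same, which is the relativity-like behavior described above.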
That seems like a nice justification for scope insensitivity. We are not insensitive, it’s just that saving 2,000 birds or saving 200,000 birds really has approximately the same moral value!
The problem with this justification is what qualifies as the “same kind of suffering”. Suppose that an infinite number of people getting a dust speck in their eyes aggregates into 1000 units of badness. If instead an infinite number of people get a dust speck in their left eyes, and an infinite number of different people get a dust speck in their right eyes, does this aggregate into 1000 or 2000 units of badness, and why? What about dust specks vs. sand specks?
Or is this supposed to aggregate over different kinds of suffering? So even an almost infinite number of people, each one mildly discomforted in a unique way, would be a less bad outcome than one person suffering horribly?
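To make the ambiguity concrete, here is the same toy model with an explicit (and entirely made-up) notion of “kinds”: with per-kind saturation, the total depends on nothing but how finely you choose to partition the sufferers.

```python
import math

def aggregate_badness(x, n, c=1000.0, k=1e6):
    # Same illustrative saturating formula as above.
    return -c * x * math.expm1(-n / k)

def total_badness(kind_counts, x=1.0):
    """Sum of per-kind saturated badness; kind_counts maps kind -> count."""
    return sum(aggregate_badness(x, n) for n in kind_counts.values())

huge = 10**12  # stands in for "practically infinite"

# One kind: every speck is just "a dust speck in an eye" -> caps near 1000.
print(total_badness({"dust speck": 2 * huge}))

# Two kinds: left-eye specks vs. right-eye specks -> caps near 2000.
print(total_badness({"left eye": huge, "right eye": huge}))
```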
...in short, it is not enough to say “in this specific scenario, I would define the proper way to calculate utility this way”; you should provide a complete theory, and then see how well it works in other scenarios.
(Also, you need to consider practically infinitely small numbers of people, that is, people suffering a certain fate with a microscopically tiny probability.)
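Under the same toy model, the natural move is to treat a probability p of suffering as p “people” suffering (this fractional-person reading is, again, my own assumption). But then the formula is almost exactly linear at the low end, so the cap does no work there at all:

```python
import math

def aggregate_badness(x, n, c=1000.0, k=1e6):
    # Same illustrative formula; n may now be fractional, reading
    # "n = p" as one person suffering the fate with probability p.
    return -c * x * math.expm1(-n / k)

p = 1e-12  # a microscopically tiny probability

print(aggregate_badness(x=1.0, n=p))        # ~1e-15
print(p * aggregate_badness(x=1.0, n=1.0))  # ~1e-15: near-identical, i.e. the
# cap is invisible at this scale and aggregation is effectively linear
```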