I do not see a contradiction in claiming that a) utility monsters do not exist and b) under utilitarianism, it is correct to kill an arbitrarily large number of nematodes to save one human.
The solution to this issue is to reject the idea of a continuous scale of “utility capability”, under which nematodes can feel a tiny amount of utility, humans can feel a moderate amount, and some superhuman utility monster can feel a tremendous amount. Rather, we can (and, I believe, should) reduce it to two classes: agents and objects.
An agent, such as a human or a utility monster, is a creature which is sentient and judged by society to be worthy of moral consideration, and so is included in the social utility function. All agents are considered equal, with their individual utility units converted to some social standard. For example, Agent Alpha receives 100 Alpha-Utils from the average day, while Agent Beta receives 200 Beta-Utils from the average day. Both are converted into the same number of Society-Utils (let’s say 10), making exchange rates of 10 Alpha-Utils and 20 Beta-Utils per Society-Util.
This is similar to how currency is exchanged. Given some reference point, perhaps an event which society deems equally valuable for all agents (that is, society values it equally regardless of which agent experiences it), there exists a Utility Economy with comparative advantage: Agent Alpha and Agent Beta serve each other, producing more Society-Utils by trading than either could alone.
Left to one side are objects, such as nematodes. Objects are of value only to the extent that they feature in an agent’s utility function; for the purpose of ethical consideration, we treat objects as having no utility function of their own. Therefore, it would be proper to kill nematodes to save humans—unless the side effects of killing so many nematodes began to threaten more humans than the killing would save. Similarly, animal protection laws would exist not because of any right of the animal, but because of the strong preference of humans to avoid animal cruelty. This is consistent with the coexistence of factory farming and animal cruelty laws: humans don’t much care about cows, but will fight to defend their pets (and creatures like them).
Of course, to some extent this is passing the buck to the “Utility Economy” to set fair rates, but I believe that a society could cobble together a reasonable exchange in which, for example, nobody’s life would be valued trivially.
If I contract a neurodegenerative illness which gradually reduces my cognitive function until I end up in a vegetative state, do I retain agent-ness throughout, or do I at some point lose equal footing with my healthy self in one go? Neither seems a good description of my slow slide from fully human to vegetable.
with their individual utility units converted to some social standard. For example, Agent Alpha receives 100 Alpha-Utils from the average day, while Agent Beta receives 200 Beta-Utils from the average day. Both of these are converted into Society-Utils—let’s say 10 Society-Utils—making an exchange rate of 10 Alpha:Society and 20 Beta:Society.
What is an “average day”? My average day probably has greater utility than that of a captive of a sadistic gang...
This is a particular instance of the general approach: “I have to assign a number to each of these items, but it’s hard and contentious to do, so instead I will give them all zeroes (objects) or ones (agents).” It always increases the total error. The world is not divided into agents and objects; and even if it were, this approach would still increase total error, or at best leave it unchanged, since classification errors produce a larger total error when estimates are thresholded instead of left as, say, probabilities.
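The thresholding claim can be illustrated with a small simulation. This is only a sketch under assumptions of my own (true “weights” uniform on [0, 1], Gaussian estimation noise), not a model anyone in the thread proposed:

```python
import random

random.seed(0)

# Each item has a true weight w in [0, 1]. An estimator produces a noisy
# estimate p; compare the squared error of keeping p directly versus
# thresholding it to 0 ("object") or 1 ("agent").
n = 10_000
err_prob = 0.0       # accumulated error when keeping the raw estimate
err_threshold = 0.0  # accumulated error when forcing 0-or-1 answers
for _ in range(n):
    w = random.random()                               # true weight
    p = min(1.0, max(0.0, w + random.gauss(0, 0.1)))  # noisy estimate
    t = 1.0 if p >= 0.5 else 0.0                      # thresholded class
    err_prob += (p - w) ** 2
    err_threshold += (t - w) ** 2

print("mean error, raw estimates:", err_prob / n)
print("mean error, thresholded:  ", err_threshold / n)
```

Under these assumptions the thresholded answers carry a much larger mean squared error than the raw estimates, which is the sense in which binning everything into two classes “increases the total error.”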
You should also consider that, when AI is developed, you will become an “object”.
This approach doesn’t work well even for humans. Very intelligent humans, armed with and experienced in mathematics, large computerized databases, regression analysis, probability and statistics, information theory, dimension reduction, data mining, machine learning, stability analysis, optimization techniques, and a good background in cognitive science, biology, and physics, think more differently from average humans of around 1600 AD than average humans of 1600 AD did from dogs. So where do you draw the line?
Very intelligent humans, … think more differently from average humans around 1600 AD, than average humans in 1600 AD did from dogs.
Reference? The prior favors humans in 1600 thinking more like modern humans than like dogs: Euarchontoglires and Laurasiatheria diverged about 90 million years ago, while humans in 1600 diverged from modern humans only 400 years ago. I might estimate the odds at 90M:400 against, if I had to do so very quickly.
Why do you think that more than half of the change in thinking in the last 90 million years has occurred in the last 400?
Obviously I cannot cite a reference; this is an opinion. I take it you think less than half of the sum total of what has been discovered or learned was learned in the past 400 years? Your prior assumes a linear advance in thinking, but hominid cranial enlargement began only 1-2 million years ago. So you must also expect, as a prior, that the difference between humans and chimps is 1/90th to 1/45th of the difference between chimps and dogs. In that case, why exclude chimps from our society?
The maximum speed at which humans have travelled is about 7 miles per second. Assuming a travel speed of 0 miles per second 4 billion years ago, we do not conclude that bacteria were able to propel themselves at 3.5 miles per second 2 billion years ago.
I don’t really think there’s been a change in humans. I think there are new tools available that help us think better, much like the new machines available that let us move fast.
You don’t believe that hominid cranial enlargement is responsible for more than half of the difference between modern humans and dogs, so why does it matter when it happened?
Suppose that dogs are 50-100 times further away from humans than chimps are. Further suppose that bacteria are more than 100 times further away from humans than dogs are. Why is one of those a reason to include chimps, and the other not a reason to include dogs? (Rocks are more than 100 times as different from humans as fungi are, right?) Rather than use relative closeness, I’m going to assert that absolute distance is what matters. (If that means that a typical human 400 years ago would not qualify now, I think it says more about them than it does about me; but I don’t think that is the case.)
I also danced around and didn’t actually say that 90M:400 was the best prior; I said if I needed one quickly it’s the one I would use. To refine that number first requires refining the question.
That it is an implementation of equality among unequal agents. Why is an average day of Agent Alpha the same value as an average day of Agent Beta, and how does Agent Beta determine how much utility Agent Alpha gains from something, other than via the reference economy?
If we allow the agents to determine their own utility derived from, say, fiat currency, we have a financial economy instead of a utility economy. Everyone gains instrumental value from participating (or they stop participating). Allow precommitment and assume rational, well-informed agents, and the economic system maximizes each individual’s utility within the possible parameters.
This seems like a pretty sensible account to me. (Does anyone see any obvious flaws?)
This is similar to how currency is exchanged. Assuming some reference point, perhaps an event which society deems is equally valuable for all agents (that is, society values it equally regardless of which agent experiences it), there exists a Utility Economy, in which there exists a comparative advantage; Agent Alpha and Agent Beta serve each other, producing more Society-Utils by trading than either could alone.
Could you explain this a bit more? I’m not sure I understand. (FYI, I know almost nothing about currency exchange.)
I don’t know much about it either, but the basic principles I’m trying to transfer are:
a) N different nations have N different currencies. Agents 1 through N have Agent1-Utils, Agent2-Utils...AgentN-Utils.
b) They are able to interact in an international market by setting an exchange rate between their currencies. In this case, we propose the extra step of creating a single societal currency, which would be analogous to a “World Dollar”, so that we need only N different conversions (Agent i to Society, i = 1..N) rather than N(N-1)/2 (Agent i to Agent j, i = 1..N, j = i+1..N), and the responsibility to set conversion rates is a societal, rather than individual, responsibility.
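The hub-currency scheme in (b) can be sketched in a few lines. The rate table reuses the Alpha/Beta numbers from the thread; the function names are hypothetical:

```python
# One stored rate per agent (Agent-Utils per Society-Util), rather than
# one rate per pair of agents.
rates = {"Alpha": 10.0, "Beta": 20.0}

def to_society(agent: str, utils: float) -> float:
    """Convert an agent's private utils into Society-Utils."""
    return utils / rates[agent]

def convert(src: str, dst: str, utils: float) -> float:
    """Convert between any two agents via the societal hub currency,
    needing only len(rates) stored rates instead of N(N-1)/2 pairs."""
    return to_society(src, utils) * rates[dst]

print(to_society("Alpha", 100))       # 100 Alpha-Utils -> 10 Society-Utils
print(convert("Alpha", "Beta", 100))  # 100 Alpha-Utils -> 200 Beta-Utils
```

Adding an N+1th agent then means adding a single entry to `rates`, rather than negotiating N new pairwise rates, which is the saving the analogy is pointing at.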
Admittedly, this analogy has its own “utility monster”—a nation which is economically powerful enough to manipulate exchange rates. However, that doesn’t quite exist in the “Utility Economy” unless one agent is powerful enough to bend society to their whim, in which case it’s not so much a utilitarian society as a dictatorship.
That looks like a great foundation for a set of laws, but a poor foundation for a set of ethics.
How so? I view this as an implementation of equality among agents. What makes it ethically repugnant?