One can’t really equate risking a life with outright killing.
Even if you can cleanly distinguish them for a human, what’s the difference from the perspective of an effectively omniscient and omnipotent agent? (Whether or not an actual AGI would be such, a proposed morality should work in that case.)
If we want a system that is aligned with human morality, we just can't make decisions based solely on the desirability of the outcome. For example: "Is it right to kill a healthy person and give their organs to five terminally ill patients, saving five lives at the cost of one?" Our moral sense flags killing an innocent bystander as immoral, even if it saves more lives. (See http://www.justiceharvard.org/)
Er, doesn’t that just mean human morality assigns low desirability to the outcome "innocent bystander killed to use organs"? (That is, if that actually is a pure terminal value—it seems to me that this intuition reflects a correct instrumental judgment based on things like harms to public trust, not a terminal judgment about the badness of a death increasing in proportion to the benefit ensuing from that death or something.)
If we want a system to be well-defined, reflectively consistent, and stable under omniscience and omnipotence, expected-utility consequentialism looks like the way to go. Fortunately, it’s pretty flexible.
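For concreteness, here is the standard expected-utility criterion being referred to (the notation is mine, not part of the original comment):

$$\mathrm{EU}(a) = \sum_{o} P(o \mid a)\, U(o)$$

The agent takes the action $a$ with the highest $\mathrm{EU}(a)$. The flexibility lies in $U$: it can assign low value to outcomes like "an innocent bystander was deliberately killed," not just to raw body counts.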
Even if you can cleanly distinguish them for a human, what’s the difference from the perspective of an effectively omniscient and omnipotent agent? (Whether or not an actual AGI would be such, a proposed morality should work in that case.)
To me, “omniscience” and “omnipotence” seem to be self-contradictory notions. Therefore, I consider it a waste of time to think about beings with such attributes.
reflects a correct instrumental judgment based on things like harms to public trust, not a terminal judgment about the badness of a death increasing in proportion to the benefit ensuing from that death or something.
OK. Do you think that if someone (e.g. an AI) kills random people for a positive overall effect, but manages to convince the public that the deaths were random accidents (so that public trust is maintained), then it is a morally acceptable option?
Er, doesn’t that just mean human morality assigns low desirability to the outcome "innocent bystander killed to use organs"?
That’s why I wrote "I am unsure how you define utilitarianism." If you evaluate only the outcome, you see f(1 dead) + f(5 alive). If you evaluate the whole process, you see f(1 person killed as an innocent bystander) + f(5 alive), which may have much lower desirability because of the moral cost of the killing.
The same consideration applies to the OP: if you evaluate only the final outcome, you may think that killing hard-to-satisfy people is a good thing. But if you add the moral penalty for killing innocent people, the equation suddenly changes.
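A toy sketch of that distinction, with made-up utility values and penalty term (nothing here comes from the thread itself):

```python
# Toy comparison of outcome-only vs. process-sensitive evaluation.
# All numbers are hypothetical, chosen only to illustrate the sign flip.

U_LIFE = 10.0            # assumed utility of one person staying alive
KILLING_PENALTY = 100.0  # assumed moral cost of deliberately killing an innocent

def outcome_only(lives_saved: int, lives_lost: int) -> float:
    """Evaluate only the final head count."""
    return U_LIFE * (lives_saved - lives_lost)

def process_sensitive(lives_saved: int, innocents_killed: int) -> float:
    """Evaluate the whole process: the same head count, plus a
    penalty for each innocent deliberately killed along the way."""
    return U_LIFE * (lives_saved - innocents_killed) - KILLING_PENALTY * innocents_killed

# Transplant case: kill 1 healthy bystander, save 5 patients.
print(outcome_only(lives_saved=5, lives_lost=1))             # 40.0  -> looks like a clear win
print(process_sensitive(lives_saved=5, innocents_killed=1))  # -60.0 -> now clearly forbidden
```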
The question of a one-dimensional versus multi-dimensional objective remains: extreme liberal moralism would say that it is not permissible to take one dollar from a person even if it could pay for saving a life, and that killing one innocent bystander is wrong even if it could save a billion lives. This is because our agents are autonomous entities with inalienable rights to life, property, and freedom, rights that cannot be violated even for the greater good.
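A minimal sketch of that side-constraint structure, as opposed to a single scalar objective (the action names, utilities, and constraint labels are my illustration, not a proposal from the thread):

```python
# Rights as hard side constraints: actions that violate them are rejected
# outright, no matter how much utility they would produce.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    utility: float                                # scalar "greater good" score
    rights_violated: set = field(default_factory=set)

def permissible(action: Action) -> bool:
    # Lexicographic priority: rights first, utility second.
    return not action.rights_violated

def choose(actions: list) -> Action:
    allowed = [a for a in actions if permissible(a)]
    return max(allowed, key=lambda a: a.utility)

actions = [
    Action("harvest organs", utility=40.0, rights_violated={"life"}),
    Action("do nothing", utility=0.0),
    Action("seek voluntary donors", utility=25.0),
]
print(choose(actions).name)  # "seek voluntary donors": best rights-respecting option
```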
These problems can only be solved if the moral agents voluntarily opt into a system that takes away a portion of their individual freedom for a greater good. However, this system should not give arbitrary power to a single entity; every (otherwise immoral) violation of autonomy should happen for a well-defined "higher" purpose.
I am not saying this is the definitive way to address morality abstractly in the presence of a superintelligent entity; these are just restatements of some of the moral principles our liberal Western democracies are built on.