Another dimension: value discovery.
Fantastic: There is a utility function representing human values (or a procedure for determining such a function) that most people (including people with a broad range of expertise) are happy with.
Pretty good: Everyone’s values are different (and often contradict each other), but there is broad agreement on how to aggregate preferences (a toy sketch of two such rules follows this list). Most people accept that the FAI needs to respect the values of humanity as a whole, not just their own.
Sufficiently good: Many important human values contradict each other, with no “best” solution to those conflicts. Most people agree on the need for a compromise but quibble over how that compromise should be reached.
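To make “aggregating preferences” concrete, here is a minimal toy sketch. The outcomes, the per-person utility numbers, and the two aggregation rules are all invented for illustration; nothing below is proposed in this thread.

```python
# Toy illustration of what "aggregating preferences" could mean. The outcomes,
# the per-person utility numbers, and the two aggregation rules below are all
# invented for this sketch; they are not anyone's actual proposal.

# Each person's utility function, as a map from candidate outcomes to scores in [0, 1].
utilities = {
    "alice": {"outcome_a": 1.0, "outcome_b": 0.4},
    "bob":   {"outcome_a": 0.9, "outcome_b": 0.5},
    "carol": {"outcome_a": 0.0, "outcome_b": 0.45},
}
outcomes = ["outcome_a", "outcome_b"]

def average_utility(outcome):
    """Utilitarian-style rule: average everyone's score for this outcome."""
    return sum(u[outcome] for u in utilities.values()) / len(utilities)

def worst_off_utility(outcome):
    """Maximin-style rule: judge an outcome by its worst-off person."""
    return min(u[outcome] for u in utilities.values())

# On these numbers the two rules disagree: averaging picks outcome_a,
# while protecting the worst-off person (carol) picks outcome_b.
print(max(outcomes, key=average_utility))    # -> outcome_a
print(max(outcomes, key=worst_off_utility))  # -> outcome_b
```

Whether to use something like the averaging rule, the maximin rule, or neither is exactly the kind of question that separates the Pretty good case (broad agreement on the rule) from the Sufficiently good case (agreement only that some compromise is needed).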
I’m tempted to add:
Not so good: The FAI team, or one team member, takes over the world. (Imagine an Infinite Doom spell done right.)
I would much rather see any single human being’s values take over the future light cone than a paperclip maximizer!
So would I. It’s not so good, but it’s not so bad either.
I agree with your Fantastic but disagree with how you rank the others… it wouldn’t be rational to favor a solution that satisfies others’ values to a greater degree at the cost of satisfying one’s own values to a lesser degree. If the solution is less than Fantastic, I’d rather see one that favors, in larger measure, the subset of humanity whose values are more similar to my own, and, in smaller measure, the subset whose values are more divergent from mine.
I know, I’m a damn, dirty, no-good egoist. But you have to admit that, in principle, egoism is more rational than altruism.
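To make the “weight people by how similar their values are to mine” idea concrete, here is a minimal toy sketch. The value vectors, the cosine-similarity measure, and the weighting scheme are all hypothetical choices made for illustration, not something anyone here proposed.

```python
# Toy illustration of similarity-weighted preference aggregation. The value
# vectors, cosine similarity, and weighting scheme are hypothetical choices.
import math

# Represent each person's values as a small numeric vector, with "me" as the reference.
values = {
    "me":    [1.0, 0.0, 0.5],
    "alice": [0.9, 0.1, 0.4],   # close to my values -> gets a large weight
    "bob":   [0.0, 1.0, 0.2],   # far from my values -> gets a small weight
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Weight each person by their value-similarity to me (clamped at zero).
weights = {name: max(cosine(values["me"], v), 0.0) for name, v in values.items()}

def aggregate(per_person_utility):
    """Similarity-weighted average of each person's utility for one outcome."""
    total = sum(weights[name] for name in per_person_utility)
    return sum(weights[name] * u for name, u in per_person_utility.items()) / total

# Bob's strong preference barely moves the result, which is the asymmetry being argued for.
print(aggregate({"me": 0.2, "alice": 0.3, "bob": 0.9}))  # lands near my 0.2, not bob's 0.9
```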
OK—I wasn’t too sure about how these ones should be worded.