To fill in the gap in 3: There is no unique set of values, but there is a unique process for deriving an optimal set of consistent preferences (up to some kind of isomorphism), though distinct individuals will get different results after carrying out this process.
As opposed to 4, which states that there is some set of processes that can derive consistent preferences but that no claims about which of these processes is best can be substantiated.
And as I said above, Eliezer believes something like 3, but insists on the caveat that, if we consider only humans, all the consistent sets of preferences generated will substantially overlap, and that we can therefore create an FAI whose consistent preferences will lie entirely within that shared set.