The problem of value aggregation has at least one obvious lower bound: divide the universe into equal parts, and have each part optimized for a given person’s preference, including game-theoretic trade between the parts to take into account each party’s preferences about the structure of the other parts. Even if the values of each person have little in common, this would be a great improvement over the status quo.
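A toy numerical sketch of that lower-bound reasoning (the numbers and utility functions here are made up purely for illustration): each agent compares a proposed post-split deal against the plain equal split and accepts only deals that are strictly better for them, so trade can only move both parties above the split baseline.

```python
# Toy illustration: two agents split the universe equally, then consider a
# single trade where B tweaks the contents of B's half in exchange for some
# of A's resources. All numbers are invented; the point is only that a
# mutually agreed trade is accepted exactly when it Pareto-improves on the
# plain equal split.

def utility_A(resources_A, b_avoids_X):
    # A values its own resources, and also cares a little about what B's half looks like
    return 1.0 * resources_A + (0.3 if b_avoids_X else 0.0)

def utility_B(resources_B, b_avoids_X):
    # B is nearly indifferent to X, but avoiding it costs a bit of optimization
    return 1.0 * resources_B - (0.1 if b_avoids_X else 0.0)

# Plain equal split, no trade: each agent fully controls half the resources.
baseline_A = utility_A(resources_A=0.5, b_avoids_X=False)   # 0.5
baseline_B = utility_B(resources_B=0.5, b_avoids_X=False)   # 0.5

# Proposed trade: A transfers 0.2 of its resources; B avoids X in its half.
deal_A = utility_A(resources_A=0.3, b_avoids_X=True)
deal_B = utility_B(resources_B=0.7, b_avoids_X=True)

# Each agent accepts only if the deal beats the no-trade split for them.
accept = deal_A > baseline_A and deal_B > baseline_B
print(accept, round(deal_A, 3), round(deal_B, 3))   # True 0.6 0.6 -- both strictly better off
```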
Good point, but this doesn’t seem to necessarily be the case for an altruist if selfish bastards are sufficiently more common than altruists who subjunctively punish selfish bastards. (Though if I recall correctly, you’re skeptical that that sort of divergence is plausible, right?)
The more negative-sum players could be worse off if their targets become better off as a result of the change, assuming that punishment is the backbone of those players’ preferences, and that the boost in power to do things with their allotted matter doesn’t compensate for the negative effect of their intended victims having a better life. I don’t believe any human is like that.
I’m not skeptical about divergence per se; of course the preferences of different people are going to be very different. I’m skeptical about distinctly unusual aspects being present in any given person’s formal preference, even when that person professes that alleged unusual aspect of their preference. That is, my position is that divergence within human universals is inevitable, but divergence from the human universals is almost impossible.
This lower bound could have some use; but my view is that, in the future, most people will be elements of bigger people, making this division difficult.
Existing societies are constructed in such a way that optimizing each person’s preference can help optimize the society’s preference. So maybe it’s possible.
I think the idea is to divide among current people(’s individual extrapolated goal systems) once and for all time, in which case this poses no problem as long as personal identity isn’t significantly blurred between now and FAI.