Great post. However, I would contend that the psychological unity of mankind is more of a minority belief on LW.
Eliezer’s writings about FAI and CEV, and most discussion about them here, assume that the psychological unity of mankind is great enough that you can build one FAI that tries to optimize human experience WRT one value system, and this will be (in some sense that I don’t understand) the “right thing to do”.
For idealized human preference, rather than human experience.
I don’t see how that makes a difference WRT the required degree of psychological unity. Just talking about “idealized human preference” assumes either psychological unity or moral realism.