This is an impressive failure to respond to what I said, which again was that you asked for an explanation of false data. “Most Friendly AI theorists” do not appear to think that extrapolation will bring all human values into agreement, so I don’t know what “arguments” you refer to or even what you think they seek to establish. Certainly the link above has Eliezer assuming the opposite (at least for the purpose of safety-conscious engineering).
ETA: This is the link to the full sub-thread. Note my response to dxu.