Ah, so you mean that humans are not perfectly aligned with each other? I was going by the definition of “aligned” in Eliezer’s “AGI Ruin” post, which was:
I am not talking about ideal or perfect goals of ‘provable’ alignment, nor total alignment of superintelligences on exact human values, nor getting AIs to produce satisfactory arguments about moral dilemmas which sorta-reasonable humans disagree about, nor attaining an absolute certainty of an AI not killing everyone. When I say that alignment is difficult, I mean that in practice, using the techniques we actually have, “please don’t disassemble literally everyone with probability roughly 1” is an overly large ask that we are not on course to get.
Likewise, in an earlier paper I mentioned that by an AGI that “respects human values”, we don’t mean to imply that current human values would be ideal or static. We just mean that we hope to at least figure out how to build an AGI that does not, say, destroy all of humanity, cause vast amounts of unnecessary suffering, or forcibly reprogram everyone’s brains according to its own wishes.
A lot of discussion about alignment takes this as the minimum goal. Figuring out what to do about humans having differing values and beliefs would be great, but if we could even get the AGI to avoid steering us into outcomes that the vast majority of humans would agree are horrible, that’d be enormously better than the opposite. And there do seem to exist humans who are aligned in this sense of “would not do things that the vast majority of other humans would find horrible, if put in control of the whole world”; even if some humans would do such things, the fact that some wouldn’t suggests that it’s also possible for some AIs not to.