But this does not constitute a disagreement between them and humans about what is right, any more than humans, in scattering a heap of 3 pebbles, are disagreeing with the Pebblesorters about which numbers are prime!
Checking my understanding: The idea is that we can’t disagree about “rightness” with pebblesorters, even if we both say “right”, because the referent of the word is different, so we’re not really talking about the same thing. Whereas with other humans the referent overlaps to a (large?) extent, so we can disagree about it to the extent that the referent overlaps.
(and our own map of that referent is both inaccurate and inconsistent between people, which is why there is disagreement about the overlapping portion)
Without having read further than this in the Sequences, I’m going to guess (assign X% probability?) that this comes back in future posts about AI, and that a large part of the FAI problem is “how to ensure the AI contains or relies on an accurate map of the referent of ‘right’, when we don’t have such a map ourselves.”
Yep