Is it even possible to have a perfectly aligned AI?
If you teach an AI to model the function f(x) = sin(x), it will only be “aligned” with your goal of computing sin(x) up to some finite computational accuracy. Either you accept an arithmetic cutoff, or the AI turns the universe into computronium to squeeze out ever-better approximations of π.
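The cutoff is easy to see in practice: refine an approximation of sin(x) and the error shrinks only until it hits the floor set by your number representation. A minimal sketch using a truncated Taylor series against 64-bit floats:

```python
import math

def taylor_sin(x, terms):
    """Approximate sin(x) with the first `terms` terms of its Taylor series."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
    return total

x = 1.0
for terms in (2, 5, 10, 20):
    err = abs(taylor_sin(x, terms) - math.sin(x))
    print(f"{terms:2d} terms: error = {err:.2e}")
# Past roughly 10 terms the error stops shrinking: we have hit the
# precision limit of 64-bit floats, not the limit of the series.
```

Adding more terms past that point buys nothing; only switching to a more precise representation (and eventually more hardware) would.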
If you try to teach an AI something like handwritten digit classification, it will come across examples that even a human couldn’t identify reliably. There is no “truth” about whether a given image is a 6 or a very badly drawn 5, other than the intent of the person who wrote it. The AI’s map can’t be absolutely correct, because the notion of correctness is not unambiguously defined in the territory. Is it a 5 because the person who wrote it intended it to be a 5? What if 75% of humans say it’s a 6?
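One way to make that ambiguity concrete is to treat the “label” as a distribution over human judgments rather than a single ground truth. A minimal sketch, with made-up rater counts standing in for real annotation data:

```python
from collections import Counter

# Hypothetical votes from 20 human raters on one ambiguous digit image:
# 75% read it as a 6, 25% as a 5.
votes = ["6"] * 15 + ["5"] * 5

counts = Counter(votes)
total = len(votes)
distribution = {label: n / total for label, n in counts.items()}
print(distribution)  # {'6': 0.75, '5': 0.25}
```

The best available “answer” here is the distribution itself; collapsing it to a single label already throws away the fact that the territory is genuinely ambiguous.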
Since there will always be both computational imprecision and epistemic uncertainty in the territory itself, the best you can ever do is probably an approximate solution: one that captures what is important, to whatever degree of confidence we ultimately decide is sufficient.
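In code, “the degree of confidence we decide is sufficient” is just an explicit error tolerance. A sketch (the function name and default tolerance are illustrative, not from any standard):

```python
import math

def aligned_enough(approx, target, tolerance=1e-9):
    """Accept an approximation once it falls within an agreed absolute error bound."""
    return math.isclose(approx, target, rel_tol=0.0, abs_tol=tolerance)

print(aligned_enough(3.14159, math.pi))         # False: off by ~2.7e-6
print(aligned_enough(3.141592653589, math.pi))  # True: within 1e-9
```

The point is that the cutoff is a choice we make, not something the territory hands us.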
I edited to clarify what I mean by “approximate value learning”.