Stephen: “the issue isn’t whether it could determine what humans want, but whether it would care.”
That is certainly an issue, but I think in this post and in Magical Categories, EY is leaving that aside for the moment, and simply focussing on whether we can hope to communicate what we want to the AI in the first place.
It seems to me that today’s computers are 100% literal and naive; EY imagines a superintelligent computer still retaining that property, but would it?