I see a lot of posts go by here on AI alignment, agent foundations, and so on, and I’ve seen various papers from MIRI or on arXiv. I don’t follow the subject in any depth, but I am noticing a striking disconnect between the concepts appearing in those discussions and recent advances in AI, especially GPT-3.
People talk a lot about an AI’s goals, its utility function, its capacity for deception, its ability to simulate you so it can get out of a box, ways of motivating it to be benign, Tool AI, Oracle AI, and so on. Some of that is just speculative talk, but there does appear to be real mathematics going on, for example on embedded agency. But when I look at GPT-3, even though this is already an AI that Eliezer finds alarming, I see none of these things. GPT-3 is a huge model, trained on huge data, for predicting text. That is not to say that it cannot be understood in cognitive terms, but I see no reason to expect that it can be. It is at least something that would have to be demonstrated before any of the formalised work on AI safety would be relevant.
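To make concrete what I mean by “trained for predicting text”: the entire training objective of a GPT-style model is to minimize the average negative log-probability it assigns to the next token given the preceding ones. Here is a minimal sketch of that objective, with a toy character-bigram model standing in for the transformer (the model class is my simplification; the loss is the standard next-token objective):

```python
import math
from collections import Counter, defaultdict

def train_bigram(text):
    """Count character bigrams and normalize to conditional probabilities
    p(next_char | prev_char) — a toy stand-in for a language model."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return {
        prev: {c: n / sum(nxts.values()) for c, n in nxts.items()}
        for prev, nxts in counts.items()
    }

def avg_neg_log_likelihood(model, text):
    """The quantity training minimizes: average -log p(next | context)."""
    nll = 0.0
    for prev, nxt in zip(text, text[1:]):
        p = model.get(prev, {}).get(nxt, 1e-12)  # tiny floor for unseen pairs
        nll -= math.log(p)
    return nll / (len(text) - 1)

model = train_bigram("abababab")
print(model["a"]["b"])  # 1.0 — 'b' always follows 'a' in the training text
print(avg_neg_log_likelihood(model, "abab"))  # 0.0 — perfectly predicted
```

Nothing in this objective mentions goals, world-models, or agency; whatever cognitive structure emerges would be an emergent property of scale, not something written into the loss.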
People speculate that bigger and better versions of GPT-like systems may give us some level of real AGI. Can systems of this sort be interpreted as having goals, intentions, or any of the other cognitive and logical concepts that these AI-safety discussions are predicated on?