Thanks for the overview and for the link to that old LiveJournal post by Scott Alexander!
So, Ilya is no longer talking to us, and Sam is talking, but “does not tell us the whole truth”, to say the least.
Yet, I think this interview indicates that Sam’s thinking is evolving. His earlier thinking was closer to the old OpenAI “official superalignment position”, namely, aiming to steer and control superintelligent AI systems, which are to be thought of as (very powerful) tools. And there are all kinds of problems with that approach.
Now he seems to be moving closer to Ilya’s way of thinking.
If he is comfortable with the idea of being replaced by an AI CEO of OpenAI, that suggests to me that the aim of steering and controlling superintelligent AIs is being abandoned.
And his musings about AI which “really helps you to figure out what your true goals in life are” resonate quite a bit with the second point of Ilya’s thinking (the “Second Challenge”) here: https://www.lesswrong.com/posts/TpKktHS8GszgmMw4B/ilya-sutskever-s-thoughts-on-ai-safety-july-2023-a
So, to the extent that we think Ilya’s approach as sketched in 2023 makes better sense than the mainstream approach, this might be a positive sign...