Richard Sutton rejects AI Risk.
AI is a grand quest. We’re trying to understand how people work, we’re trying to make people, we’re trying to make ourselves powerful. This is a profound intellectual milestone. It’s going to change everything… It’s just the next big step. I think this is just going to be good. Lots of people are worried about it—I think it’s going to be good, an unalloyed good.
Introductory remarks from his recent lecture on the OaK Architecture.
“Richard Sutton rejects AI Risk” seems misleading in my view. What risks is he rejecting specifically?
His view seems to be that AI will replace us, that humanity as we know it will go extinct, and that this is okay. E.g., here he speaks positively of a Moravec quote, “Rather quickly, they could displace us from existence”. Most people would count our extinction among the risks they mean when they say “AI Risk”.
I didn’t know that when posting this comment, but I agree that’s a better description of his view! I guess the ‘unalloyed good’ he’s talking about involves the extinction of humanity.
Yes. And this actually seems to be a relatively common perspective from what I’ve seen.
If it helps, I criticized Richard Sutton RE alignment here, and he replied on X here, and I replied back here.
Also, Paul Christiano mentions an exchange with him here:
[Sutton] agrees that all else equal it would be better if we handed off to human uploads instead of powerful AI. I think his view is that the proposed course of action from the alignment community is morally horrifying (since in practice he thinks the alternative is “attempt to have a slave society,” not “slow down AI progress for decades”—I think he might also believe that stagnation is much worse than a handoff but haven’t heard his view on this specifically) and that even if you are losing something in expectation by handing the universe off to AI systems it’s not as bad as the alternative.