There's a conversation LeCun had with Stuart Russell and a few others in a Facebook comment thread back in 2019, arguing about instrumental convergence.

The full conversation is a bit long and difficult to skim. I haven't finished reading it myself, but in it LeCun links to an article he co-authored for Scientific American, which argues that x-risk from AI misalignment isn't something people should worry about. (He's more concerned about misuse risks.) Here's a quote from it:

"We dramatically overestimate the threat of an accidental AI takeover, because we tend to conflate intelligence with the drive to achieve dominance. [...] But intelligence per se does not generate the drive for domination, any more than horns do."

My read of LeCun in that conversation is that he doesn't think in terms of outer alignment / value alignment at all, but rather in terms of implementing a series of "safeguards" that allow humans to recover if the AI behaves poorly (see Steven Byrnes' summary).

I think this paper helps clarify why he believes this: he had something like this architecture in mind, and so outer alignment seemed basically impossible. Independently, he believes it's unnecessary because the obvious safeguards will prove sufficient.