LeCun's handwaving that AGIs will be as manageable as corporations makes two big mistakes right away:
it underestimates all the ways in which intelligence can be improved over corporations
it overestimates how well we've aligned corporations
Without even going as far back as the East India Company (which, incidentally, shows well how corporate law is something we arrived at only gradually, through many bloody failures of alignment): when corporations like Shell deliberately hid their knowledge of climate-change risks, wasn't that corporate misalignment producing a near-existential risk for humanity?
It's honestly a bad joke to hear such words from a man employed at the top of a corporation whose own dumb Engagement Maximizer behaviour got it sued for complicity in genocide. That's our sterling example of alignment? The man doesn't even seem to know the recent history of his own employer.
People say Eliezer is bad at communicating, and sure, he does get pointlessly technical about some relatively simple concepts there, IMO. But LeCun's position is so bad I can't help but see it as a textbook case of Sinclair's Law: "it is difficult to get a man to understand something, when his salary depends on his not understanding it".
I would assume LeCun assumes that someone from Facebook HR reads every tweet he posts, and that his tweets are written at least partly for that audience. That's an even stronger scenario than Sinclair's description, which concerns what a man believes in the privacy of his own mind, as opposed to what he says in public, in writing, under his real name. In this circumstance… some people would say whatever they believed even if it hurt their company, but I'd guess that's fewer than 10% of people. I don't think I would, though that may be partly because pseudonymous communication works fine for me.
If he were so gagged that he couldn't speak his real mind, he could simply not speak at all. I don't think Meta gives him detailed instructions about how much time to spend on Twitter arguing against and ridiculing people worried about AI safety. This feels like a personal chip on his shoulder, speaking as someone who has watched his increasingly dismissive takes on the topic over the last few weeks.
Yeah, that's true. Still, in the course of such arguing, he could run into a point he couldn't think of a good counterargument to. At that moment, I could see him being tempted to say "Hmm, all right, that's a fair point", then imagining HR asking him to explain why he posted that, and instead resorting to "Your fearmongering is hurting people". (I believe the name for that is "appeal to consequences".)