LeCun’s position is so bad I can’t help but feel it’s a typical example of Sinclair’s law: “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”
I would assume that LeCun assumes someone from Facebook HR is reading every tweet he posts, and that his tweets are written at least partly for that audience. That’s an even stronger scenario than Sinclair’s description, which is about what the man believes in the privacy of his own mind, as opposed to what he says in public, in writing, under his real name. In that situation… some people would say whatever they believed even if it was bad for their company, but I’d guess it’s fewer than 10% of people. I don’t think I would, though that might be partly because pseudonymous communication works fine for me.
If he were so gagged that he couldn’t speak his real mind, he could simply not speak at all. I don’t think Meta gives him detailed instructions about how much time he has to spend on Twitter arguing against and ridiculing people worried about AI safety. This feels like a personal chip on his shoulder, coming from someone who has watched his increasingly dismissive takes on the topic over the last few weeks.
Yeah, that’s true. Still, in the course of such arguing, he could run into a point he couldn’t think of a good argument against. At that moment, I could see him being tempted to say “Hmm, all right, that’s a fair point”, then imagining HR asking him to explain why he posted that, and instead resorting to “Your fearmongering is hurting people”. (I believe the name for that is “appeal to consequences”.)