Either he’s not trying to be calibrated, or he’s not good at being calibrated; probably the former. Like, my inside view also screams fairly loudly that AGI in 2020 is never going to happen, but assigning 99% confidence to that inside view would be far too much confidence. I expect LeCun is mostly communicating what his inside view is confident about.
There are lots of good non-alignment ML researchers whose timelines are much, much shorter (including many working at DeepMind and OpenAI). Of course, it could be that they are the ones who are wrong and LeCun is right, but I don’t see a particularly compelling reason to make that judgment.