I intuitively agree with your answer. Avturchin also commented something similar (he said 2019, though for different reasons). So I think I might not be communicating my confusion clearly.

I don't remember exactly when, but there were some debates between Yann LeCun and AI Alignment folks in a Facebook group (maybe the "AI Safety" open discussion group, a few months ago). What struck me was how confident LeCun was about long timelines. I think, for him, the 1% would be at least 10 years out. How do you explain that someone with access to private information (e.g., at FAIR) might have timelines so different from yours?

Meta: Thanks for clearly expressing your confidence levels through your writing with "hard", "maybe", and "should": it's very efficient.
EDIT: Le Cun thread: https://www.facebook.com/groups/aisafety/permalink/1178285709002208/
Either he's not trying to be calibrated, or he's not good at being calibrated; probably the former. My inside view also screams fairly loudly that AGI in 2020 is never going to happen, but assigning 99% confidence to my inside view would be far too much confidence. I expect LeCun is mostly communicating what his inside view is confident about.
There are lots of good non-alignment ML researchers whose timelines are much, much shorter (including many working at DeepMind and OpenAI). Of course, it could be that they are the ones who are wrong and LeCun is right, but I don't see a particularly compelling reason to make that judgment.