IMO, Andrew Ng is the most important name that could have been there but isn’t. Virtually everything I know about machine learning I learned from him and I think there are many others for which that is true.
For anyone who wasn’t aware, both Ng and LeCun have strongly indicated that they don’t believe existential risks from AI are a priority. Summary here
You can also check out Yann’s Twitter.
Ng believes the problem is “50 years” down the track, and Yann believes that many concerns AI Safety researchers have are not legitimate. Both of them view talk about existential risks as distracting and believe we should address problems that can be seen to harm people in today’s world.
He posted on Twitter a request to talk to people who feel strongly here.