You’re right, and my above comment was written in haste. I didn’t mean to imply Eliezer thought those directions were pointless, he clearly doesn’t. I do think he’s stated, when asked on here by incoming college students what they should do, something to the effect of “I don’t know, I’m sorry”. But I think I did mischaracterize him in my phrasing, and that’s my bad, I’m sorry.
My only note is that, when addressing newcomers to the AI safety world, the log-odds perspective on the benefit of working on safety requires several prerequisites that many of those folks don’t share. In particular, for those not bought into longtermism/pure utilitarianism, “dying with dignity” by increasing humanity’s odds of survival from 0.1% to 0.2%, at substantial professional and emotional cost to yourself during the ~10 years you believe you still have, is not, prima facie, a sufficiently compelling reason to work on AI safety. In that case, arguing that from an outside view the number might not actually be so low seems an important thing to highlight to people, even if they eventually update down that far upon forming an inside view.
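(For concreteness, here’s the arithmetic behind the log-odds framing: doubling the survival probability from 0.1% to 0.2% is roughly a one-bit gain in log-odds, even though the absolute probability barely moves. A minimal sketch, assuming the standard log-odds definition:)

```python
import math

def log_odds_bits(p):
    # Log-odds of probability p, measured in bits: log2(p / (1 - p)).
    return math.log2(p / (1 - p))

# Going from 0.1% to 0.2% survival probability:
gain = log_odds_bits(0.002) - log_odds_bits(0.001)
# gain is approximately 1 bit, since the odds roughly double.
```

So under this framing the same action looks like a negligible 0.1-percentage-point improvement on an absolute scale, but a full bit of progress on a log-odds scale, which is why the framing matters so much to the argument.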