New article from Oren Etzioni

(Cross-posted from EA Forum.)

This just appeared in this week’s MIT Technology Review: Oren Etzioni, “How to know if AI is about to destroy civilization.” Etzioni is a noted skeptic of AI risk. Here are some things I jotted down:

Etzioni’s key points / arguments:

  • Warning signs that AGI is coming soon (like canaries in a coal mine: when they start dying, we should get worried)

    • Automatic formulation of learning problems

    • Fully self-driving cars

    • AI doctors

    • Limited versions of the Turing test (like Winograd Schemas)

      • If we get to the Turing test itself then it’ll be too late

    • [Note: I think if we get to practically deployed fully self-driving cars and AI doctors, then we will have already had to solve more limited versions of AI safety. It’s a separate debate whether those solutions would scale up to AGI safety though. We might also get the capabilities without actually being able to deploy them due to safety concerns.]

  • We are decades away from the versatile abilities of a 5-year-old

  • Arguing that we should prepare anyway, even at a very low probability, because the consequences would be extreme is Pascal’s Wager

    • [Note: This is a decision theory question, and I don’t think that’s his area of expertise. I’ve researched Pascal’s Wager extensively, and it’s not at all clear to me where to draw the line between low-probability, high-consequence scenarios that we should factor into our decisions and very-low-probability, very-high-consequence scenarios that we should not (see the toy expected-value sketch after this list). I’m not sure there is any principled way of drawing that line, which might be a problem if AI risk turns out to be a borderline case.]

  • If and when a canary “collapses” we will have ample time to design off switches and identify red lines we don’t want AI to cross

  • “AI eschatology without empirical canaries is a distraction from addressing existing issues like how to regulate AI’s impact on employment or ensure that its use in criminal sentencing or credit scoring doesn’t discriminate against certain groups.”

  • Agrees with Andrew Ng that it’s too far off to worry about now
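
[A toy expected-value sketch related to my Pascal’s Wager note above. This is my own illustration, not anything from Etzioni’s article; the numbers and the `expected_loss` helper are invented.]

```python
# Illustrative only: a toy expected-value comparison for the Pascal's Wager note.
# All numbers are made up; the point is the structure of the argument, not the values.

def expected_loss(probability: float, loss: float) -> float:
    """Expected loss contributed by a single scenario."""
    return probability * loss

# A mundane risk we routinely act on: insuring against a house fire.
house_fire = expected_loss(probability=0.003, loss=300_000)

# A hypothetical tail risk: far smaller probability, far larger loss.
tail_risk = expected_loss(probability=1e-6, loss=1e12)

print(house_fire)  # 900.0
print(tail_risk)   # 1000000.0 -- larger, despite the far smaller probability

# Naive expected value says the tail risk dominates, yet the Pascal's Wager
# objection says to discount it. The difficulty is that there is no obvious
# probability threshold below which the product probability * loss should
# stop counting toward our decisions.
```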

But he seems to agree with the following:

  • If we don’t end up doing anything about it, then yes, superintelligence would be incredibly dangerous

  • If we get to human-level AI, then superintelligence will follow very soon afterwards, so it’ll be too late at that point

  • If it were coming a lot sooner (as other experts expect), then it sounds like he would agree with the alarmists

  • If the probability were more than tiny, then again it sounds like he’d agree, because he wouldn’t consider it Pascal’s Wager

  • If there’s not ample time between “canaries collapsing” and AGI (as I think other experts expect), then we should be worried a lot sooner

  • If it wouldn’t distract from other issues like regulating AI’s impact on employment, it sounds like he might agree that it’s reasonable to put some effort into it (although this point is a little less clear)

See also Eliezer Yudkowsky, “There’s no fire alarm for Artificial General Intelligence.”