Top 9+2 myths about AI risk

Following some somewhat misleading articles quoting me, I thought I'd present the top 9 myths about the AI risk thesis:

  1. That we’re certain AI will doom us. Certainly not. It’s very hard to be certain of anything involving a technology that doesn’t exist; we’re just claiming that the probability of AI going bad isn’t low enough that we can ignore it.

  2. That humanity will survive, because we’ve always survived before. Many groups of humans haven’t survived contact with more powerful intelligent agents. In the past, those agents were other humans; but they need not be. The universe does not owe us a destiny. In the future, something will survive; it need not be us.

  3. That uncertainty means that you’re safe. If you’re claiming that AI is impossible, or that it will take countless decades, or that it’ll be safe… you’re not being uncertain, you’re being extremely specific about the future. “No AI risk” is the certain claim; “possible AI risk” is where we stand.

  4. That Terminator robots will be involved. Please? The threat from AI comes from its potential intelligence, not from its ability to clank around slowly with an Austrian accent.

  5. That we’re assuming the AI is too dumb to know what we’re asking it. No. A powerful AI will know what we meant to program it to do. But why should it care? And if we could figure out how to program “care about what we meant to ask”, well, then we’d have safe AI.

  6. That there’s one simple trick that can solve the whole problem. Many people have proposed that one trick. Some of them could even help (see Holden’s tool AI idea). None of them reduce the risk enough for us to relax – and many of the tricks contradict each other (you can’t design an AI that’s both a pure tool and one that socialises with humans!).

  7. That we want to stop AI research. We don’t. Current AI research is very far from the risky areas and abilities. And it’s risk-aware AI researchers who are most likely to figure out how to make safe AI.

  8. That AIs will be more intelligent than us, hence more moral. It’s pretty clear that in humans, high intelligence is no guarantee of morality. Are you really willing to bet the whole future of humanity on the idea that AIs might be different? That among the billions of possible minds out there, there is none that is both dangerous and very intelligent?

  9. That science fiction or spiritual ideas are useful ways of understanding AI risk. Science fiction and spirituality are full of human concepts, created by humans, for humans, to communicate human ideas. They need not apply to AI at all: these could be minds far removed from human concepts, possibly without a body, possibly with no emotions or consciousness, possibly with many new emotions and a different type of consciousness, and so on. Anthropomorphising the AIs could lead us completely astray.

Lists cannot be comprehensive, but they can adapt and grow, adding more important points:
  10. That AIs have to be evil to be dangerous. The majority of the risk comes from indifferent or partially nice AIs: those that have some goal to follow, with humanity and its desires just getting in the way – using resources, trying to oppose it, or just not being perfectly efficient for its goal.

  11. That we believe AI is coming soon. It might; it might not. Even if AI were known to be in the distant future (and currently it isn’t known), some of the groundwork is worth laying now.