Reaching out to people about the problems of friendly AI

There have been a few attempts to reach out to broader audiences in the past, but mostly on very politically or ideologically loaded topics.

After seeing several examples of how little understanding people have of the difficulties in creating a friendly AI, I'm horrified. And I'm not even talking about a farmer on some hidden ranch, but about people who should know about these things: researchers, software developers dabbling in AI research, and so on.

What made me write this post was a highly voted answer on stackexchange.com, which claims that the danger of superhuman AI is a non-issue, and that the only way for an AI to wipe out humanity is if "some insane human wanted that, and told the AI to find a way to do it". And the poster claims to be working in the AI field.

I've also seen a TEDx talk about AI. The speaker had apparently never heard of the paperclip maximizer; the talk was about the dangers presented by AIs as depicted in movies like The Terminator, where an AI "rebels". The conclusion was that we can hope AIs will not rebel, because they cannot feel emotion, so the events depicted in such movies will not happen, and all we have to do is be ethical ourselves and not deliberately write malicious AI, and then everything will be OK.

The sheer and mind-boggling stupidity of this makes me want to scream.

We should find a way to increase public awareness of the difficulty of the problem. The paperclip maximizer should become part of public consciousness, a part of pop culture. Whenever there is a relevant discussion about the topic, we should mention it. We should raise awareness of old fairy tales about a jinn who misinterprets wishes. Whatever it takes to ingrain the importance of these problems into public consciousness.

There are many people graduating every year who've never heard about these problems. Or if they have, they dismiss them as a non-issue, a contradictory thought experiment that can be dismissed without a second thought:

A nuclear bomb isn't smart enough to override its programming, either. If such an AI isn't smart enough to understand that people do not want to be starved or killed, then it doesn't have a human level of intelligence at any point, does it? The thought experiment is contradictory.

We don't want our future AI researchers to start working with such a mentality.

What can we do to raise awareness? We don't have the funding to make a movie that becomes a cult classic. We might start downvoting and commenting on the aforementioned stackexchange post, but that would not solve much, if anything.