Yes, it’s a difficult problem for a layman to know how alarmed to be. I’m in the AI field, and I’ve thought that superhuman AI was a threat since about 2003. I’d be glad to engage you in an offline object-level discussion about it, comprehensible to a layman, if you think that would help. I have some experience in this, having engaged in many such discussions. It’s not complicated or technical, if you explain it right.
I don’t have a general theory for why people disagree with me, but here are several counterarguments I have encountered. I phrase them as though they were being suggested to me, so “you” is actually me.
— Robots taking over sounds nuts, so you must be crazy.
— This is an idea from a science fiction movie. You’re not a serious person.
— People often predict the end of the world, and they’ve always been wrong before. And often been psychologically troubled. Are you seeing a therapist?
— Why don’t any of the top people in your field agree? Surely if this were a serious problem, they’d be all over it. (I don’t hear this one much anymore.)
— AIs won’t be dangerous, because nobody would be so foolish as to design them that way. Or to build AIs capable of long-term planning, or to direct AIs toward foolish or harmful goals. Or various other sentences containing the phrase “nobody would be so foolish as to”.
— AIs will have to obey the law, so we don’t have to worry about them killing people or taking over, because those things are illegal. (Yes, I’ve actually heard this one.)
— Various principles of computer science show that it is impossible to build a machine that makes correct choices in all circumstances. (This is where the “no free lunch” theorem comes in; I give its formal statement just after this list. Of course, we’re not proposing a machine that makes correct choices in all circumstances, just one that makes mostly correct choices in the circumstances it encounters.)
— There will be lots of AIs, and the good ones will outnumber the bad ones and hence win.
— It’s impossible to build a machine with greater-than-human intelligence, because of <philosophical principle here>.
— Greater wisdom leads to greater morality, so a superhuman AI is guaranteed beneficent.
— If an AI became dangerous, I would just unplug it. Yes, I’d be able to spot it, and no, the AI wouldn’t be able to talk me out of it, or otherwise stop me.
— Machines can never become conscious. Which implies safety, somehow.
— Present-day AIs are obviously not able to take over the world. They’re not even scary. You’re foolishly over-reacting.
— The real problem of AI is <something else, usually something already happening>. You’re distracting people with your farfetched speculation.
— My whole life, people have been decrying technological advances and saying they were bad, and they’ve always been wrong. You must be one of those Luddites we keep hearing about.
— If it becomes a problem, people will take care of it.
— My paycheck depends on my not agreeing with you. (I’ve been working on this one: convincing my friends in the AI business to retreat from frontier development. Results are mixed.)
— Superhuman machines offer vast payoff! We must press ahead regardless.
— If humans are defeated, that’s good actually, because evolution is good.
Many of these are good arguments, but unfortunately they’re all wrong.
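A footnote on the no-free-lunch item, for anyone who wants the formal version. As I recall the Wolpert–Macready result (my reconstruction from memory, so take the notation as approximate): for any two optimization algorithms $a_1$ and $a_2$,

\[
\sum_{f} P\left(d_m^y \mid f, m, a_1\right) = \sum_{f} P\left(d_m^y \mid f, m, a_2\right),
\]

where the sum runs over all possible objective functions $f$, $m$ is the number of function evaluations, and $d_m^y$ is the sequence of cost values observed. Averaged over every conceivable problem, all algorithms perform equally well. But real environments are a tiny, highly structured corner of “every conceivable problem,” and the theorem says nothing about performance there. That’s why it doesn’t rule out a machine that chooses mostly correctly in the circumstances it actually encounters.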
I am happy to have a conversation with you. On this point:
‘— The real problem of AI is <something else, usually something already happening>. You’re distracting people with your farfetched speculation.’
I believe that AI indeed poses huge problems, so maybe this is where I sit.
I tend to concentrate on extinction, as the most massive and terrifying of the risks. I think the smaller problems can be dealt with by the usual methods, as our society has dealt with lots of things. Which is not to say that they aren’t real problems that do real harm and require real solutions. My disagreement is with “You’re distracting people with your farfetched speculation.” I don’t think raising questions of existential risk makes it harder to deal with more quotidian problems. And even if it did, that wouldn’t be an argument against the reality of extinction risk.