AGI researchers who are not concerned with Friendliness are trying to destroy human civilization. They may not believe that they are doing so, but this does not change the fact of the matter.
“Trying to” normally implies intent.
I’ll grant that someone working on AGI (or even narrower AI) who has become aware of the Friendliness problem, but doesn’t believe it is an actual threat, could be viewed as irresponsible—unless they have reasoned grounds to doubt that their creation would be dangerous.
Even so, “trying to destroy the world” strikes me as hyperbole. People don’t typically say that the Manhattan Project scientists were “trying to destroy the world” even though some of them thought there was an outside chance it would do just that.
On the other hand, the Teller report on atmosphere ignition should be kept in mind by anyone tempted to think “nah, those AI scientists wouldn’t go ahead with their plans if they thought there was even the slimmest chance of killing everyone”.