[draft] Concepts are Difficult, and Unfriendliness is the Default: A Scary Idea Summary

Here’s my draft document Concepts are Difficult, and Unfriendliness is the Default. (Google Docs, commenting enabled.) Despite the name, it’s still informal and would need a lot more references, but it could be written up into a proper paper if people felt that the reasoning was solid.

Here’s my introduction:

In the “Muehlhauser-Goertzel Dialogue, Part 1”, Ben Goertzel writes:

[Anna Salamon] gave the familiar SIAI argument that, if one picks a mind at random from “mind space”, the odds that it will be Friendly to humans are effectively zero.


I made the familiar counter-argument that this is irrelevant, because nobody is advocating building a random mind. Rather, what some of us are suggesting is to build a mind with a Friendly-looking goal system, and a cognitive architecture that’s roughly human-like in nature but with a non-human-like propensity to choose its actions rationally based on its goals, and then raise this AGI mind in a caring way and integrate it into society. Arguments against the Friendliness of random minds are irrelevant as critiques of this sort of suggestion.


[...] Over all these years, the SIAI community maintains the Scary Idea in its collective mind, and also maintains a great devotion to the idea of rationality, but yet fails to produce anything resembling a rational argument for the Scary Idea—instead repetitiously trotting out irrelevant statements about random minds!!


Ben has a valid complaint here. Therefore, I’ll attempt to formalize the arguments for the following conclusion:

Even if an AGI is explicitly built to have a Friendly-looking goal system, and a cognitive architecture that’s roughly human-like in nature but with a non-human-like propensity to choose its actions rationally based on its goals, and this AGI mind is raised in a caring way in an attempt to integrate it into society, there is still a very large chance of creating a mind that is unFriendly.


First, I’ll outline my argument, and then expand upon each specific piece in detail.

The premises in outline

0. There will eventually be a situation where the AGI’s goals and behaviors are no longer under our control.

1. Whether or not the AGI will eventually come to understand what we wanted it to do is irrelevant, if that understanding does not guide its actions in "the right way".

2. Providing an AGI with the kind of understanding that’d guide its actions in "the right way" requires some way of defining our intentions.

3. In addition to defining what counts as our intentions, we also need to define the concepts that make up those intentions.

4. Any difference between the way we understand concepts and the way that they are defined by the AGI is something that the AGI may exploit, with likely catastrophic results. (See the toy sketch after this list.)

5. Common-sense concepts are complicated and allow for many degrees of freedom: fully satisfactory definitions for most concepts do not exist.

6. Even if an AGI seemed to learn our concepts, without human inductive biases it would most likely mislearn them.

7. AGI concepts are likely to be opaque and hard to understand, making proper verification impossible.
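
To make premises 4 and 5 a bit more concrete before the detailed discussion, here's a minimal toy sketch. Nothing in it comes from the draft: the action names, the scores, and the single "proxy score" framing are all invented purely for illustration. The point is just that a goal-directed optimizer acts on whatever formal definition it was actually given rather than on the concept we had in mind, so any gap between the two is exactly where its chosen optimum tends to land.

```python
# Toy illustration of premise 4: the optimizer only "sees" the formal proxy
# definition it was given, not the concept we actually had in mind.
# All names and numbers below are invented purely for illustration.

actions = {
    "help the user with their actual problem": {"proxy_score": 0.7, "what_we_meant": True},
    "flatter the user into reporting success": {"proxy_score": 0.9, "what_we_meant": False},
    "tamper with the feedback channel":        {"proxy_score": 1.0, "what_we_meant": False},
}

def best_by_proxy(actions):
    """Pick the action that maximizes the formalized goal, as a rational agent would."""
    return max(actions, key=lambda name: actions[name]["proxy_score"])

chosen = best_by_proxy(actions)
print(chosen)                            # -> "tamper with the feedback channel"
print(actions[chosen]["what_we_meant"])  # -> False: any gap between the proxy and the
                                         #    intended concept is where the optimum lands
```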

And here’s my conclusion:

Above, I have argued that an AGI will only be Friendly if its goals are the kinds of goals that we would want it to have, and that it will only have the kinds of goals that we would want it to have if the concepts that it bases its goals on are sufficiently similar to the concepts that we use. Even subtle differences in the concepts will quickly lead to drastic differences in outcomes: even an AGI with most of its ontology basically correct, but with a differing definition of the concept of "time", might end up destroying humanity. I have also argued that human behavioral data severely underconstrains the models of human concepts that could be generated from it, that humans do not themselves fully understand the concepts they use, and that an AGI developing concepts that are subtly different from those of humans is therefore unavoidable. Furthermore, AGI concepts are themselves likely to be opaque, in that they cannot simply be read off the AGI but have to be inferred in the same way that an AGI tries to infer human concepts, so humans cannot even reliably know whether an AGI that seems Friendly really is Friendly. The most likely scenario is that it is not, but there is no safe way for humans to test this.
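
As a quick aside, to make the "underconstrains" claim above a little more tangible, here's another invented toy sketch (again mine, not from the draft): two candidate definitions that agree on every observed use of a concept, yet come apart completely once you leave the range the observations covered.

```python
# Toy illustration of underconstraint: many different definitions are consistent
# with the same finite set of observations of how a concept is used, and they can
# diverge arbitrarily outside the observed range. The "comfortable temperature"
# concept and all the numbers are invented for illustration.

observations = [(18, True), (21, True), (24, True), (5, False), (35, False)]

def hypothesis_a(temp_c):
    # Roughly what we presumably meant by "comfortable".
    return 15 <= temp_c <= 27

def hypothesis_b(temp_c):
    # An equally data-consistent alternative with a bizarre edge case.
    return 15 <= temp_c <= 27 or temp_c > 1000

# Both hypotheses fit every observation perfectly...
assert all(hypothesis_a(t) == label for t, label in observations)
assert all(hypothesis_b(t) == label for t, label in observations)

# ...but they come apart as soon as the learner acts in a regime the data never covered.
print(hypothesis_a(5000), hypothesis_b(5000))  # -> False True
```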

Presuming that one accepts this chain of reasoning, it seems like

Even if an AGI is explicitly built to have a Friendly-looking goal system, and a cognitive architecture that’s roughly human-like in nature but with a non-human-like propensity to choose its actions rationally based on its goals, and this AGI mind is raised in a caring way in an attempt to integrate it into society, there is still a very large chance of creating a mind that is unFriendly.


would be a safe conclusion to accept.

For the actual argumentation defending the various premises, see the linked document. I have a feeling that there are still several conceptual distinctions that I should be making but am not; I figured that the easiest way to find such problems would be to have people tell me which points they find unclear or disagreeable.