“There are plenty of conceivable architectures for which this meta level thinking is incapable of happening, yet nevertheless are capable of producing arbitrarily complex intelligent behavior.”
Maybe, but that’s exactly like the orthogonality thesis. The fact that something is possible in principle doesn’t mean there’s any easy way to do it in practice. The easy way to produce arbitrarily complex intelligent behavior in practice is to produce something that can abstract to an arbitrary degree of generality, and that means recognizing abstractions like “goal”, “good,” and so on.
The reason a human baby becomes intelligent over time is that, right from the beginning, it has the ability to generalize to pretty much any degree necessary. So I don’t see how that argues against my position. I would expect AIs also to require a process of “growing up,” although you might be able to speed that process up so that it takes months rather than years. That is yet another reason why the orthogonality thesis is false in practice: AIs that grow up among human beings will grow up with relatively humanlike values (although not exactly human ones), and the fact that arbitrary values are possible in principle will not make them actual.
I actually had specific examples in mind: basically all GOFAI approaches to general AI. But in any case this logic doesn’t seem to hold up. You could argue that something needs to HAVE goals in order to be intelligent (I don’t think so, at least not with the technical definition typically given to “goals,” but I will grant it for the purpose of discussion). Even so, it doesn’t follow that the thing has to be aware of these goals, or able to introspect on them. One can have goals without being aware that one has them, or without being able to represent those goals explicitly. Most human beings fall into this category most of the time, sad to say.
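To make the point concrete, here is a toy sketch of my own (not from the discussion): a simple reflex agent whose fixed condition-action rules produce behavior an observer would call goal-directed, even though nothing in its state represents a “goal” as such.

```python
# Toy illustration: goal-seeming behavior without any explicit goal.
# The agent only follows local condition-action rules; "wanting to
# reach the food" exists only in the eye of the observer.

def reflex_step(position, food):
    """Move one step using fixed condition-action rules."""
    if position < food:
        return position + 1   # rule: food to the right -> step right
    if position > food:
        return position - 1   # rule: food to the left -> step left
    return position           # rule: otherwise stay put

position = 0
for _ in range(10):
    position = reflex_step(position, food=7)

print(position)  # 7 -- it reliably ends up at the food, as if it "wanted" it
```

Nothing here introspects, and nothing stores a goal object; the apparent purposefulness is entirely implicit in the rules.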
I am saying the opposite. Having a goal, in Eliezer’s sense, is contrary to being intelligent. That is, doing everything you do for the sake of one thing and only one thing, and being incapable of doing anything else, is the behavior of an idiotic fanatic, not of an intelligent being.
I said that to be intelligent you need to understand the concept of a goal. That does not mean having one; in fact it means the ability to have many different goals, because your general understanding enables you to see that there is nothing forcing you to pursue one particular goal fanatically.
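The contrast can be sketched in code (again my own illustrative example, with invented names): an agent that understands the concept of a goal treats goals as explicit, swappable objects it can evaluate actions against, rather than having one objective hardwired in.

```python
# Toy illustration: "goal" as an explicit, first-class concept.
# The same agent can pursue many different goals, because the goal
# is a parameter rather than a built-in fixation.

def greedy_agent(state, actions, goal_score):
    """Pick the action whose outcome the current goal rates highest."""
    return max(actions, key=lambda a: goal_score(state + a))

actions = [-1, 0, 1]
maximize = lambda s: s        # one possible goal: make the number large
minimize = lambda s: -s       # another possible goal: make it small

print(greedy_agent(5, actions, maximize))  # 1
print(greedy_agent(5, actions, minimize))  # -1
```

The point of the sketch is only that representing goals explicitly is what makes switching between them possible; a hardcoded objective admits no such flexibility.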
Smells like a homunculus. What guides your reasoning about your goals?
Do you mean how do you decide which goal to choose? Many different causes. For example if someone tells you that something is good, you might do it, just because you trust them and they told you it was good. They don’t even have to say what goal it will accomplish, other than the fact that it will be something good.
Note that when you do that, you are not trying to accomplish any particular goal other than “something good,” which is completely general. For all you know it could be paperclips, if the person who told you was a paperclipper, or it might be something entirely different.