Well, Goertzel’s argument is pretty much bulletproof when it comes to learning algorithms like the ones he works on, where the goal is essentially set by training, along with human culture and the human notion of what counts as a stupid goal. I.e., an AI that reuses human culture as the foundation for superhuman intelligence.
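A minimal sketch of that point, assuming a toy imitation-learning setup (the dataset, model, and numbers here are all illustrative stand-ins, not anything from Goertzel's actual systems): the only "goal" the trained model ends up with is the one implied by the human-produced training signal.

```python
import numpy as np

# Hypothetical illustration: in a learning system, the "goal" is whatever
# the training objective encodes. Here the objective is to imitate
# human-produced labels, so the learned behavior inherits its target
# from the human-generated data rather than being chosen independently.

rng = np.random.default_rng(0)

# Toy dataset standing in for human-generated examples (assumption:
# any culturally produced corpus would play the same role).
X = rng.normal(size=(200, 3))
human_labels = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

w = np.zeros(3)
lr = 0.1
for _ in range(500):
    # Logistic regression by gradient descent: the gradient of the
    # imitation loss is the only thing shaping w, i.e. the system's
    # effective goal is set entirely by the training signal.
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= lr * X.T @ (p - human_labels) / len(X)

print("learned weights:", w)
```

The point being that for this class of system the objective is inherited from the human-generated data it is trained on, not picked freely from the space of all possible goals.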
Ultimately, orthogonality dissolves once you get specific about which intelligence we’re talking about: assume it is subject to speed-of-light lag and is not physically very small, and it dissolves; assume it is a learning algorithm that reaches adult human level by absorbing human culture, and it dissolves; and so on. The orthogonality thesis is only correct in the sense that, while remaining entirely ignorant of the specifics of what the ‘intelligence’ is, you can’t attribute any qualities to it, which is trivially true.