Without getting into the likelihood of a ‘typical AI researcher’ successfully creating a benevolent AI, do you doubt Goertzel’s “Interdependency Thesis”? I find both theses to be rather obviously true. Yes, it’s possible in principle for almost any goal system to be combined with almost any type or degree of intelligence, but that’s irrelevant in practice, because we can expect the distributions over both to be highly correlated in some complex fashion.
I really don’t understand why this Orthogonality idea is still brought up so much on LW. It may be true, but it doesn’t lead to much.
The space of all possible minds or goal systems is about as relevant to the space of actual practical AIs as the space of all configurations of a human’s molecules is to that human’s set of potential children.