“You cannot predict, in advance, which of your values will be needed to judge the path through time that the genie takes.… The only safe genie is a genie that shares all your judgment criteria.”
Is a genie that does share all my judgment criteria necessarily safe?
Maybe my question is ill-formed; I am not sure what “safe” could mean besides “a predictable maximizer of my judgment criteria”. But I am concerned that human judgment under ordinary circumstances increases some sort of Beauty/Value/Coolness which would not be increased if that same human judgment were used to search over a less restricted set of possibilities.
The world is full of cases where selecting for A automatically increases B when you are searching over a restricted set of possibilities but does not increase B when those restrictions are lifted. Overfitting is a classic example. In cases of overfitting, if we search only over a restricted set of few-parameter models, models that do well on the training set will automatically do well on the generalization set, but if we allow more parameters the correlation disappears.
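The overfitting case can be made concrete with a toy sketch (synthetic data and NumPy, purely illustrative, not anything from the post): selecting models for training-set fit ("A") reliably improves held-out fit ("B") in a restricted low-parameter class, but not once the class is rich enough to memorize the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy linear relationship, with only 10 training points.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(0.0, 0.3, size=x_train.size)
x_test = np.linspace(0.0, 1.0, 200)
y_test = 2.0 * x_test + rng.normal(0.0, 0.3, size=x_test.size)

def fit_errors(degree):
    # "Selecting for A": minimize squared error on the training set,
    # searching within a polynomial class of the given degree.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    # "B": squared error on held-out data.
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

# A degree-9 polynomial (which can interpolate all 10 training points)
# fits the training set better than a degree-1 line, yet generalizes worse.
train1, test1 = fit_errors(1)
train9, test9 = fit_errors(9)
```

Within the restricted (degree-1) class, optimizing A drags B along; lift the restriction and the A–B correlation comes apart.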
Modern marketing / product development can search over a larger set of alternatives than we used to have access to. In many cases human judgments correlate with fewer other good properties when applied to modern manufactured goods than when applied to the smaller set of goods that was formerly available. Judgments of tastiness used to correlate with health but now do not. Judgments of “this is a limited resource which I should grab quickly” used to indicate resources we really should grab quickly, but now often do not (because of manufactured “limited time offer only” signs and the like).
Genies or AGIs would search over an even larger space of possibilities than contemporary marketing searches over. In this larger space, many of the traditional correlates of human judgment will disappear. That is: in today’s restricted search spaces, outcomes which are ranked highly according to human judgment criteria tend also to have various other properties P1, P2, …, Pk. In an AGI’s search space, outcomes which are ranked highly according to human judgment criteria will not have properties P1, …, Pk.
I am worried that properties P1, …, Pk are somehow valuable. That is, I am worried that in this world human judgments pick out outcomes that are somehow valuable, and that human judgments’ ability to do this resides not in our judgment criteria alone (which would be uploaded into our imagined genie), but in the conjunction of our judgment criteria with the restricted set of possibilities that has so far been available to us.
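The worry about larger search spaces can also be sketched numerically, under an assumed model that is mine, not the post’s (it is essentially the “regressional Goodhart” setup): each outcome has a hidden true value, standing in for properties P1, …, Pk, and a judged score equal to that value plus judgment error. The more outcomes the optimizer searches over, the more the top-scoring outcome is selected for error rather than value.

```python
import numpy as np

rng = np.random.default_rng(1)

def winners_gap(search_size, trials=2000):
    # Assumed toy model: score = true value + independent judgment error,
    # both standard normal. We pick the outcome with the highest score
    # and measure how much its score overstates its true value.
    gaps = np.empty(trials)
    for t in range(trials):
        value = rng.normal(size=search_size)   # hidden properties P1..Pk
        score = value + rng.normal(size=search_size)  # human judgment of it
        best = np.argmax(score)
        gaps[t] = score[best] - value[best]
    return gaps.mean()

# Averaged over many trials, the winner's overstatement grows with the
# size of the searched space: the harder you optimize the judgment
# criteria, the more the "winning" outcome is an artifact of the error term.
print(winners_gap(10), winners_gap(10_000))
```

With a small search space the top-scoring outcome is still mostly high-value; with a vast one, the selection pressure concentrates on whatever the judgment criteria get wrong.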