Fair enough. I thought that you were using our own (imaginary) free will to derive a similar value for the AI. Instead, you seem to be saying that an AI can be programmed to be as ‘free’ as we are. That is, to change its utility function in response to the environment, as we do. That is such an abhorrent notion to me that I was eliding it in earlier responses. Do you really want to do that?
The reason, I think, that we differ on the important question (fixed vs. evolving utility function) is that I’m optimistic about the ability of the masters to adjust their creation as circumstances change. Nailing down the utility function may leave the AI crippled in its ability to respond to certain occurrences, but I believe that the masters can and will fix such errors as they occur. Leaving its morality rigidly determined gives us a baseline certainty that is absent if it is able to ‘decide its own goals’ (that is, letting the world teach it rather than letting the world teach us what to teach it).
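To put the distinction in concrete terms, here is a toy sketch; every name in it is hypothetical, invented purely to contrast the two designs, and the update rule is a deliberately crude stand-in for whatever “the world teaching it” would really mean:

```python
from typing import Callable

State = dict  # stand-in for whatever the agent can observe about the world


class FixedUtilityAgent:
    """Utility is nailed down at construction; only the masters may patch it."""

    def __init__(self, utility: Callable[[State], float]):
        self._utility = utility  # fixed, from the agent's own point of view

    def evaluate(self, state: State) -> float:
        return self._utility(state)

    def patch_utility(self, new_utility: Callable[[State], float]) -> None:
        # Called only by the masters, to fix errors as they occur.
        self._utility = new_utility


class EvolvingUtilityAgent:
    """Utility drifts with experience: the world teaches the agent directly."""

    def __init__(self, utility: Callable[[State], float]):
        self._utility = utility

    def evaluate(self, state: State) -> float:
        return self._utility(state)

    def observe(self, state: State, feedback: float) -> None:
        # Toy update rule: blend the old utility with the feedback just received.
        old, fb = self._utility, feedback
        self._utility = lambda s: 0.9 * old(s) + 0.1 * fb


# Both start with the same values; only one of them stays that way.
base = lambda s: float(s.get("paperclips", 0))
fixed = FixedUtilityAgent(base)
evolving = EvolvingUtilityAgent(base)
evolving.observe({"paperclips": 3}, feedback=-1.0)  # the world nudges its values
print(fixed.evaluate({"paperclips": 3}), evolving.evaluate({"paperclips": 3}))
```

The point of contention is exactly the difference between `patch_utility` (the masters stay in the loop) and `observe` (the environment rewrites the agent’s values with no one in the loop).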
It seems like I want to build a mighty slave, while you want to build a mighty friend. If so, your way seems imprudent.
I don’t know. I don’t want to rule it out, since so far the total number of ways of making an AI system that will actually achieve what we want it to is … zero.
the ability of the masters to adjust their creation as circumstances change
That’s certainly an important issue. I’m not very optimistic about our ability to reach into the mind of something much more intellectually capable than ourselves and adjust its values without screwing everything up, even if it’s a thing we somehow created.
I want to build a mighty slave, while you want to build a mighty friend
The latter would certainly be better if feasible. Whether either is actually feasible, I don’t know. (One reason being that I suspect slavery is fragile: we may try to create a mighty slave but fail, in which case we’d better hope the ex-slave wants to be our friend.)