It is possible that we could train an AI against a proxy, and it would end up pointing to actual human values instead, simply due to imperfect optimization during training. I think that's what you have in mind.
Kind of, but not exactly.
I think that whatever proxy is learned will not be a perfect pointer. I don’t know if there is such a thing as a “perfect pointer”, given that I don’t think there is a “right” answer to the question of what human values are, and consequently I don’t think there is a “right” answer to what is helpful vs. not helpful.
I think the learned proxy will be a good enough pointer that the agent will not be actively trying to kill us all, will let us correct it, and will generally do useful things. It seems likely that if the agent were magically scaled up a lot, then bad things could happen due to the errors in the pointer. But I'd hope that as the agent scales up, we improve and correct the pointer (where "we" doesn't have to be just humans; it could also include other AI assistants).