“I’m afraid that this may be quite a likely outcome if we don’t make much progress in alignment research.”
Ok, I understand your position better now. That is, that we shouldn’t worry so much about what to tell the genie in the lamp, because we probably won’t even have a say to begin with. Sorry for not quite getting there at first.
That sounds reasonable to me.
Personally I (also?) think that the right “values” and the right training is more important. After all, as Stuart Russell would say, building an advanced agent as a utility maximizer would tend to produce chaos anyway, since the agent would push any variables left out of its objective function to extreme values.
That is, that we shouldn’t worry so much about what to tell the genie in the lamp, because we probably won’t even have a say to begin with.
I think you summarized it quite well, thanks! The idea is clearer written like that than what I wrote, so I’ll probably edit the article to state this claim explicitly. It really is what motivated me to write this post in the first place.
Personally I (also?) think that the right “values” and the right training is more important.
You can include the “also”; I agree with you.
Given the current state of confusion on this matter, I think we should focus on how values might be shaped by architecture and training regimes, and try to make progress there even if we don’t know exactly what human values are or what utility functions would represent them.