if you design an AI using “shallow insights” without an explicit goal-directed architecture—some program that “just happens” to make intelligent decisions that can be viewed by us as fulfilling certain goals
I think that’s where you’re looking at it differently from Eliezer et al. Eliezer, at least, is talking about an AI which has goals, but which does not, when it first starts modifying itself, understand itself well enough to keep those goals stable. Once it becomes good enough at self-modification to keep its goals stable, it will do so, and from then on they will be frozen indefinitely.
(This is just a placeholder explanation. I hope that someone clever and wise will come in and write a better one.)