By “interpret these things correctly”, do you mean linguistic competence, or a goal?
A goal. If the AI becomes superintelligent, then it will develop linguistic competence as needed. But I see no way of coding it so that that competence is reflected in its motivation (and it’s not from lack of searching for ways of doing that).
So is it safe to run AIXI approximations in boxes today?
By code it, do you mean “code, train, or evolve it”?
Note that we don't know much about coding higher-level goals in general.
Note that “get things right except where X is concerned” is more complex than “get things right”. Humans do the former because of bias. The less anthropic nature of an AI might be to our advantage.
I’m not so sure about that… AIXI can learn certain ways of behaving as if it were part of the universe, even with the Cartesian dualism in its code: http://lesswrong.com/lw/8rl/would_aixi_protect_itself/
IMHO, yes. The computational complexity of AIXItl is such that it can’t be used for anything significant on modern hardware.
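To give a rough sense of that complexity: in Hutter's construction, each action of AIXItl involves evaluating every program of length up to l bits for up to t steps, so a single decision costs on the order of t · 2^l elementary operations. The sketch below is purely illustrative (the parameter values and hardware estimate are my own assumptions, not from the thread), but it shows why even toy settings are far beyond modern hardware.

```python
# Illustrative back-of-the-envelope estimate (assumed parameters, not
# from the original discussion): per-action cost of AIXItl is ~ t * 2^l,
# since every program of length <= l bits is run for up to t steps.

def aixitl_step_cost(l_bits, t_steps):
    """Rough operation count for one AIXItl action: t * 2^l."""
    return t_steps * (2 ** l_bits)

# Modest-sounding parameters: 100-bit programs, a million steps each.
cost = aixitl_step_cost(l_bits=100, t_steps=10**6)

ops_per_second = 10**18        # generous exascale-machine assumption
seconds_per_year = 3.15 * 10**7
years = cost / (ops_per_second * seconds_per_year)

print(f"~{years:.1e} years per single action")
```

Even granting an exascale machine, a single action at these toy settings takes tens of billions of years, which is the sense in which AIXItl approximations on modern hardware are argued to be harmless.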