When someone reinvents the Oracle AI, the most common opening remark runs like this: "Why not just have the AI answer questions, instead of trying to do anything? Then it wouldn't need to be Friendly. It wouldn't need any goals at all. It would just answer questions."
To which the reply is that the AI needs goals in order to decide how to think: that is, the AI has to act as a powerful optimization process in order to plan its acquisition of knowledge, effectively distill sensory information, pluck “answers” to particular questions out of the space of all possible responses, and of course, to improve its own source code up to the level where the AI is a powerful intelligence.
Is that where the problem is? Is having goals a problem per se?
All these events are “improbable” relative to random organizations of the AI’s RAM, so the AI has to hit a narrow target in the space of possibilities to make superintelligent answers come out.
Is that where the problem is? Why would it be? AI design isn't a random shot in the dark. If a tool AI has the goal of answering questions correctly, and doing nothing else, where is the remaining problem?
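This exchange can be made concrete with a toy search problem. The sketch below is pure illustration, not anyone's proposed design: the target string, scoring rule, and step budget are all invented for the example. It shows both halves of the point at once: a uniformly random sample of the answer space essentially never lands on a narrow target, while even a crude optimization process pointed at the goal "match the target" gets there in a few thousand steps.

```python
import random

# Toy illustration only: target, alphabet, and budget are invented here.
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "superintelligent answer"  # stand-in for the narrow target region


def random_guess(length: int) -> str:
    """Sample one point uniformly from the space of possible strings."""
    return "".join(random.choice(ALPHABET) for _ in range(length))


def score(candidate: str) -> int:
    """Count positions that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))


def hill_climb(max_steps: int = 100_000) -> str:
    """Crude optimization process: keep any one-character mutation that
    scores at least as well as the current candidate."""
    current = random_guess(len(TARGET))
    for _ in range(max_steps):
        if current == TARGET:
            break
        i = random.randrange(len(TARGET))
        mutated = current[:i] + random.choice(ALPHABET) + current[i + 1:]
        if score(mutated) >= score(current):
            current = mutated
    return current


if __name__ == "__main__":
    # A single uniform sample hits the 23-character target with probability
    # (1/27) ** 23, roughly 1e-33: "improbable" relative to random
    # organizations of RAM.
    print("random guess:", random_guess(len(TARGET)))
    # A goal-directed search finds it in a few thousand mutations.
    print("hill climb:  ", hill_climb())
```

The asymmetry is the dispute in miniature: the hill-climber succeeds only because it has a goal and steers toward it, which is the property the quoted passage says a question-answering AI cannot do without. The question the commentary presses is whether a goal restricted to answering correctly, and doing nothing else, leaves any danger behind.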