In other words, there is no way to program a search for objective morality or for any other search target without the programmer specifying or defining what constitutes a successful conclusion of the search.
If you understand this, then I am wholly at a loss to understand why you think an AI should have “universal” goals or a goal system zero or whatever it is you’re calling it.
The flip answer is that the AI must have some goal system, and the designer of the AI must choose it. The community contains vocal egoists — Peter Voss, Hopefully Anonymous, maybe Denis Bider — who want the AI to help them achieve their egoistic ends. Are you less at a loss to understand them than you are to understand me?