In other words, there is no way to program a search for objective morality or for any other search target without the programmer specifying or defining what constitutes a successful conclusion of the search.
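To make this concrete, here is a minimal sketch (my own illustration, not anything from the original discussion): a generic search routine can explore states, but it cannot conclude anything unless the programmer hands it a success predicate. The names `search`, `neighbors`, and `is_goal` are hypothetical.

```python
from collections import deque

def search(start, neighbors, is_goal):
    """Generic breadth-first search over states reachable from `start`.

    Note that the function cannot terminate with a 'successful
    conclusion' unless the caller supplies `is_goal` -- a predicate,
    written by the programmer, defining what counts as success.
    """
    frontier = deque([start])
    seen = {start}
    while frontier:
        state = frontier.popleft()
        if is_goal(state):  # the success criterion is specified externally
            return state
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None  # search space exhausted without meeting the criterion

# Toy example: the machinery runs, but only because we told it that
# "success" means reaching the integer 5.
result = search(0, lambda n: [n + 1], lambda n: n == 5)
```

The point the code makes is purely structural: remove the `is_goal` argument and there is nothing left for the search to converge on, whatever the domain.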
If you understand this, then I am wholly at a loss to understand why you think an AI should have “universal” goals or a goal system zero or whatever it is you’re calling it.