Define “moral” as referring to human ethics, whatever those may be. Define “Friendly” as meaning “does the best possible thing according to human ethics, whatever those may be.” Define “AI” as meaning “superintelligence.” Any Friendly AI, by these definitions, would behave differently depending on whether X is “moral.”
You may have to be a bit more specific. What in the FAI’s code would look different between world 1 and world 2?
Define “moral” as referring to human ethics, whatever those may be. Define “Friendly” as meaning “does the best possible thing according to human ethics, whatever those may be.” Define “AI” as meaning “superintelligence.” Any Friendly AI, by these definitions, would behave differently depending on whether X is “moral.”
Does that answer your question?
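One way to picture the answer is a toy sketch (not anyone’s actual FAI design; `human_ethics_verdict`, `friendly_ai_act`, `world_1`, and `world_2` are all hypothetical names introduced here for illustration): the agent’s source code is byte-for-byte identical in both worlds, because it defers to the external fact of what human ethics says rather than hard-coding an answer. Only the world it queries differs.

```python
def human_ethics_verdict(action, world):
    # Stand-in for whatever procedure resolves "human ethics,
    # whatever those may be." The code is the same in both worlds;
    # only the moral fact it looks up differs.
    return world["moral_facts"][action]

def friendly_ai_act(actions, world):
    # "Does the best possible thing according to human ethics":
    # pick the action with the highest verdict.
    return max(actions, key=lambda a: human_ethics_verdict(a, world))

# Two worlds that differ only in whether X is "moral".
world_1 = {"moral_facts": {"do_X": 1.0, "refrain_from_X": 0.0}}
world_2 = {"moral_facts": {"do_X": 0.0, "refrain_from_X": 1.0}}

print(friendly_ai_act(["do_X", "refrain_from_X"], world_1))  # do_X
print(friendly_ai_act(["do_X", "refrain_from_X"], world_2))  # refrain_from_X
```

So nothing in the code itself “looks different” between world 1 and world 2; the difference shows up only in the code’s behavior, exactly as the definitions require.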