Social consequences aside, is it morally correct to kill one person to create a million people who would not otherwise have existed?
How would a world in which it is morally correct to kill one person in order to create a million people look different from a world in which this is not the case?
Friendly AIs would behave differently, for one thing.
You may have to be a bit more specific. What in the FAI’s code would look different between world 1 and world 2?
Define “moral” as referring to human ethics, whatever those may be. Define “Friendly” as meaning “does the best possible thing according to human ethics, whatever those may be.” Define “AI” as a superintelligence. Any Friendly AI, by these definitions, would behave differently depending on whether X (here, the act in question) is “moral”.
Does that answer your question?
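One way to make that reply concrete is the minimal Python sketch below (every name in it is hypothetical; this is an illustration of the definitional point, not anyone's actual design). The agent-building code can be byte-for-byte identical in world 1 and world 2, because it holds a pointer to “human ethics, whatever those may be” rather than a hard-coded verdict; the behavioral difference comes entirely from which way the moral fact falls.

```python
# Hypothetical sketch: a "Friendly AI" that defers to human ethics rather
# than encoding any particular moral answer itself.

from typing import Callable, List

Action = str

def make_friendly_ai(
    human_ethics: Callable[[Action], bool]
) -> Callable[[List[Action]], List[Action]]:
    """Build an agent that takes exactly the actions human ethics endorses."""
    def choose(actions: List[Action]) -> List[Action]:
        return [a for a in actions if human_ethics(a)]
    return choose

X = "kill one person to create a million people"

# World 1: X is moral. World 2: X is not. The code above is the same in
# both worlds; only the moral fact the agent consults differs.
world_1_ai = make_friendly_ai(lambda a: a == X)  # ethics endorses X
world_2_ai = make_friendly_ai(lambda a: False)   # ethics does not

print(world_1_ai([X]))  # ['kill one person to create a million people']
print(world_2_ai([X]))  # []
```

On this reading, nothing in the FAI’s code looks different between the two worlds; what differs is the output of the ethics procedure the code defers to, and hence the AI’s behavior.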