In this line of business you meet an awful lot of people who think that an arbitrarily generated powerful AI will be “moral”.
A good counter to this argument would be to find a culture whose morals are strongly opposed to our own, and demonstrate that those morals are logical and internally consistent. My inability to think of such a culture could be taken as evidence that a sufficiently powerful AI would be moral. But I think it’s more likely that the morals we agree on are properties common to most moral frameworks that are workable in our particular biological and technological circumstances. You should be able to demonstrate that an AI need not be moral by our standards by writing a story set in a world whose technology and biology differ from ours enough that our morals fall short. But nobody would publish it.