A tendency to become corrupt when placed into positions of power is a feature of some minds.
Morality in the human universe is a compromise between conflicting wills. The compromise is useful because the alternative is conflict, and conflict is wasteful. Law is a specific instance of this, so consider property rights: a property right is a decision-making procedure for resolving conflicting desires concerning the owned object. There is really no point in even having property rights except in the context of potential conflict. Remove conflict, and you remove the raison d'être of property rights, and more generally the raison d'être of law, and more generally the raison d'être of morality. Give a person power, and he no longer needs to compromise with others; for him the raison d'être of morality vanishes, and he acts as he pleases.
The feature of human minds that renders morality necessary is the possibility that humans can have preferences that conflict with the preferences of other humans, thereby requiring a decision-making procedure for deciding whose will prevails. Preference, furthermore, is revealed in the actions a mind takes, so a mind that acts has preferences. All of the above therefore applies to an artificial intelligence, provided the artificial intelligence acts.
What makes you think a human-designed AI would be vulnerable to this kind of corruption?
I am assuming it acts, and therefore makes choices, and therefore has preferences, and therefore can have preferences which conflict with the preferences of other minds (including human minds).