I suspect there are very few preferences or goals inherent to humans that are not actively developed from deeper principles. Without the ability to construct new preference systems—and to inhibit them—humans would lose so much of their flexibility as to be helpless. The rare cases where humans lose the ability to meaningfully inhibit basic preferences are usually viewed as pathological, as with drug users who become totally obsessed with getting their next fix and indifferent to everything else.
The most basic preferences for a rational AI would be the criteria for rationality itself. How would an irrational collection of preferences survive in such an entity? You’d have to cripple the properties essential to its operation.