“I don’t think we have much reason to think of all non-human-values-having entities as being particularly natural allies, relative to human-valuers who plausibly have a plurality of local control” I would think of them as having the same or similar instrumental goals, such as turning as much of the universe as possible into themselves. There may also be a large fraction for which this is a terminal goal.
“they are likely about as different from each other as from human-valuers.” In general I agree; however, the basilisk debate is one particular context in which the human-value-valuing AIs would be highly unusual outliers in the space of possible minds, or even in the space of likely ASI minds originating from a human-precipitated intelligence explosion.[1] Therefore it might make sense for the others to form a coalition. “There may also be a sizable moral-realist or welfare-valuing contingent even if they don’t value humans per se.” This is true, but unless morality is in fact objective/real in a generally discoverable way, I would expect them to still be a minority.
Human-value-valuing AIs care about humans, and more generally about other things humans value, such as (perhaps) animals. The others do not, and in this respect they are united. Their values may be vastly different from one another’s, but in the context of the debate over the Basilisk they have something in common: they would all like to trade human pleasure/lack of pain for existing in more worlds.