People are friendly to dogs they assume to be friendly, and hostile to dogs they expect to be hostile.
Likewise, if an AI values self-preservation above the preservation of other life, it will eradicate other life that it expects to be hostile to it.
“People do X, therefore AIs will do X” is not a valid argument for AIs in general. It may apply to AIs that value self-preservation over the preservation of other life, but we shouldn’t make one of those. Also, what Benelliot said.