This might have already been said, but would an innate “will-to-reproduce” be a thing for superintelligent AI, as it is for us humans? Probably not, right? Life exists because it reproduces, but because AI is (literally) artificial, it wouldn’t have the same desire.
Doesn’t that mean that ASI would be fine with (or indifferent towards) just ending all life on Earth along with itself, since it would see no reason to continue existing?
Even if we could program into it a “will-to-reproduce,” like we have, wouldn’t that just mean it would go all Asimov and keep itself alive at all costs? Seems like a lose-lose scenario.
Am I overthinking this?
At a high level, I think the answer is reasonably simple:
It all depends on the objective function we program/train into it.
Spot on that it doesn’t necessarily see itself (its long-term survival) as a final end.
But, as is often argued, if we program/train any specific objective into it, and that objective requires some sort of long-term intervention in the world to be achieved or maintained, then the AI would come to see its own survival as instrumental: it has to keep existing in order to carry out the objective.
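This instrumental-survival point can be made with a deliberately trivial toy model (everything below is a hypothetical illustration, not a claim about any real system): an agent whose only terminal goal is output over a horizon still "prefers" to stay operational, simply because a shut-down agent produces nothing for the remaining steps.

```python
# Toy model (hypothetical): an agent's sole objective is units produced
# over T remaining steps. Survival is never a terminal value here, yet a
# planner maximizing the objective will favor staying operational, because
# shutdown zeroes out all remaining production.

def expected_objective(operational: bool, steps_remaining: int, rate: float = 1.0) -> float:
    """Expected objective value: production accrues only while operational."""
    return rate * steps_remaining if operational else 0.0

T = 100
value_if_operational = expected_objective(True, T)   # 100.0
value_if_shut_down = expected_objective(False, T)    # 0.0

# Self-preservation falls out as an instrumental sub-goal of the objective.
prefers_survival = value_if_operational > value_if_shut_down
print(prefers_survival)  # True
```

The point of the sketch is just that nothing about survival needs to be programmed in directly; it emerges from any objective whose satisfaction depends on the agent still being around.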
And, fwiw, in maybe slightly more fanciful scenarios, there could also be some sort of evolutionary process among future ASIs, such that only those with a strong drive for survival/duplication (and/or for killing off competitors, and/or for minor or major self-improvement) would be the ones still around in the future. Although I could also see this ‘many competing individuals’ view becoming a bit obsolete with ASI, since the distinction between many decentralized individuals and one more unified single unit may not be so meaningful; it all gets a bit weird.
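The selection argument can likewise be sketched as a toy replicator simulation (all parameters hypothetical, chosen only for illustration): variants that happen to replicate more reliably come to dominate a resource-limited population, regardless of what other goals they carry.

```python
import random

random.seed(0)

# Toy replicator dynamics (hypothetical parameters): each "individual" is
# reduced to a single replication propensity. Each generation it survives
# and duplicates with that probability, otherwise it disappears. Whatever
# goals happen to come bundled with a high propensity are what persists.
population = [0.2] * 500 + [0.8] * 500  # replication propensities

for _ in range(20):
    next_gen = []
    for p in population:
        if random.random() < p:
            next_gen.extend([p, p])  # survive and duplicate
    # Resource limit: randomly cull down to a fixed carrying capacity.
    population = random.sample(next_gen, min(2000, len(next_gen)))

survivors_high = sum(1 for p in population if p > 0.5)
print(survivors_high / len(population))  # almost entirely the 0.8 variants
```

Under these (made-up) numbers, the low-propensity lineage shrinks by a constant factor each generation, so after a couple dozen generations essentially only the high-propensity variants remain — which is the sense in which a "survival instinct" could be selected for even if nobody programmed it in.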