is pretty much defined by how the question is interpreted. It could swing pretty wildly, but the obvious interpretation seems ~tautologically bad.
So there’s an argument here, one I don’t subscribe to, but I have seen prominent AI experts make it implicitly.
If you have children, and they have children, and so on in a series of mortal generations, then with each generation n+1 more and more of your genetic distinctiveness is lost. Language and culture evolve as well.
This is the ‘value drift’ argument: whatever you value now (yourself, the people you know, your culture, your language, and your various forms of identity) loses some percentage each year as generations turn over. Value is being discounted over time.
It will eventually diminish to zero as long as humans keep dying of aging.
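The discounting claim above can be sketched as simple exponential decay. This is just an illustration, assuming a constant annual drift rate `r` (a made-up parameter, since the argument never specifies one):

```python
# Sketch of the value-drift argument as exponential discounting.
# The drift rate r is hypothetical; the argument only claims r > 0.
def remaining_value(initial_value: float, r: float, years: int) -> float:
    """Fraction of today's value surviving after `years` of drift."""
    return initial_value * (1 - r) ** years

# With even a modest 2% annual drift, after 300 years less than
# 1% of what we value today would remain.
print(remaining_value(1.0, 0.02, 300))
```

The exact rate doesn’t matter for the argument: any constant positive drift drives the surviving value toward zero as the horizon grows.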
You might argue that the people 300+ years from now will at least share genetics with the people alive today, but that is not necessarily true once genetic editing and bespoke biology are available, where all the prior rules of what’s possible are thrown out.
So you are comparing outcome A, where hundreds of years from now alien cyborgs descended from present-day people exist, against outcome B, where hundreds of years from now descendants of some AI are all that exist.
“Value” wise, you could argue that A == B: both have negligible value compared to what we value today.
I’m not sure this argument is correct, but it does discount away the future, and it is a strong argument against longtermism.
Value drift only potentially stops once immortal beings exist, and AIs are immortal from the very first version. In theory, an AI system trained on all of human knowledge, even if it goes on to kill its creators and consume the universe, need not forget any of that knowledge. As an individual, it would also know more human skills, knowledge, and culture than any human ever could, so in a way such a being is a human++.
The AI expert who expressed this is near the end of his expected lifespan, and from the perspective of an individual who is about to die, there is no difference between “cyborg” distant descendants and pure robots.