In that case, AI risk becomes similar to aging risk – it will kill me and my friends and relatives. The only difference is the value of future generations.
Extinction-level AI risk kills future generations, but mundane AI risk (e.g., ubiquitous drone clouds, with only some people surviving in bunkers) still assumes the existence of future generations. Mundane AI risk also does not require superintelligence.
I wrote on similar topics in https://philpapers.org/rec/TURCOG-2 and here: https://philpapers.org/rec/TURCSW
The casualness with which you throw out this comment seems to validate my assertion that “AI risk” and “risk of a misaligned AI destroying humanity” have become nearly conflated because of what, from the outside, appears like an incidental idiosyncrasy, longtermism, that initially attracted people to the study of AI alignment.
Part of the asymmetry that I’m trying to get acknowledgement of is subjective (or, if you prefer, due to differing utility functions). For most people “aging risk” is not even a thing but “I, my friends, and relatives all being killed” very much is. This is not a philosophical argument, it’s a fact about fundamental values. And fundamental differences in values, especially between large majorities and empowered minorities, are a very big deal.
My point was that if I assume that aging and death are bad, then I personally strive to live indefinitely long, and I wish the same for other people. In that case, longtermism becomes a personal issue unrelated to future generations: I can only live for billions of years if civilization exists for billions of years.
In other words, if there is no aging and death, there are no “future generations” in the sense that they exist after my death.
Moreover, if AI risk is real, then AI is a powerful thing and it can solve the problem of aging and death. Anyone who survives until AI arrives will be either instantly dead or practically immortal. In that case, “future generations after my death” is inapplicable.
None of that will happen if AI gets stuck halfway to superintelligence. There will be no immortality, but a lot of drone warfare. In other words, to be a mundane risk, AI has to have a mundane capability limit. For now, we don’t know whether it will.
Well, it doesn’t sound like I misunderstood you so far, but just so I’m clear, are you not also saying that people ought to favor being annihilated by a small number of people controlling an aligned (to them) AGI that also grants them immortality over dying naturally with no immortality-granting AGI ever being developed? Perhaps even that this is an obviously correct position?
Certainly, I am against currently living people being annihilated. If a superintelligent AI is created but does not provide immortality and resurrection for ALL people who ever lived, it is a misaligned AI in my opinion.
I asked Sonnet to ELI5 your comment and it said:
Option 1: A small group of people controls a very powerful AI that does what they want. This AI might give those people immortality (living forever), but it might also destroy or control everyone else.
Option 2: No super-powerful AI gets built at all, so people just live and die naturally like we do now.
Both outcomes are bad in my opinion.
But being equally against both requires a positive program to prevent Option 1 other than the default of halting technological development that can lead to it (and thus taking Option 2, or a delay in immortality because human research is slower)! Conversely, without committing to finding such a program, pursuing the avoidance of Option 2 is an implicit acceptance of Option 1. Are you committing to this search? And if it fails, which option will you choose?
Option 3: A benevolent AI that cares about the values and immortality of all people who ever lived.