I agree that delaying a pure existential risk that has no potential upside—such as postponing the impact of an asteroid that would otherwise destroy complex life on Earth—would be beneficial. However, the risk posed by AI is fundamentally different from something like an asteroid strike because AI is not just a potential threat: it also carries immense upside potential to improve and save lives. Specifically, advanced AI could dramatically accelerate the pace of scientific and technological progress, including breakthroughs in medicine. I expect this kind of progress would likely extend human lifespans and greatly enhance our quality of life.
Therefore, if we delay the development of AI, we are likely also delaying these life-extending medical advances. As a result, people who are currently alive might die of aging-related causes before these benefits become available. This is a real and immediate issue that affects those we care about today. For instance, if you have elderly relatives whom you love and want to see live longer, healthier lives, then—assuming all else is equal—it makes sense to want rapid medical progress to occur sooner rather than later.
This is not to say that we should accelerate AI recklessly, or that we should do so even if it would dramatically increase existential risk. I am just responding to your objection, which was premised on the idea that delaying AI could be worth it even if the delay doesn’t reduce x-risk at all.
Presumably, under a common-sense person-affecting view, this doesn’t just depend on the upside; it also depends on the absolute level of risk. E.g., suppose that building powerful AI killed 70% of people in expectation and that delay had no effect on the ultimate risk. I think a (human-only) person-affecting, common-sense view would then delay indefinitely. My guess is that the threshold at which such a view would favor indefinite delay (supposing delay didn’t reduce risk, that we have the current demographic distribution, and that there isn’t some global emergency) is around 5-20% expected fatalities, but I’m pretty unsure, and it depends on some fairly atypical hypotheticals that don’t come up very much. Typical people are pretty risk-averse, though, so I wouldn’t be surprised if a real “common-sense” view set the bar much lower.
(Personally, I’d be unhappy about an indefinite delay even if the risk were unavoidably very high, because I’m mostly longtermist. A moderate-length delay that saves some lives, where we eventually do get to the future, seems good to me, though I’d broadly prefer no delay if the delay isn’t improving the situation from the perspective of the long-run future.)