I care deeply about many, many people besides just myself (in fact I care about basically everyone on Earth), and it’s simply not realistic to expect that I can convince all of them to sign up for cryonics. That limitation alone makes it clear that focusing solely on cryonics is inadequate if I want to save their lives. I’d much rather support both the acceleration of general technological progress through AI and cryonics in particular than place all hope in just one of those approaches.
Furthermore, curing aging would be far superior to merely making cryonics work. The process of aging—growing old, getting sick, and dying—is deeply unpleasant and degrading, even if one assumes a future where cryonic preservation and revival succeed. Avoiding that suffering entirely is vastly more desirable than having to endure it in the first place. Merely signing everyone up for cryonics would be insufficient to address this suffering, whereas I think AI could accelerate medicine and other technologies to greatly enhance human well-being.
The value difference commenters keep pointing out needs to be far bigger than they represent it to be, in order for it to justify increasing existential risk in exchange for some other gain.
I disagree with this assertion. Aging poses a direct, large-scale threat to the lives of billions of people in the coming decades. It doesn’t seem unreasonable to me to suggest that literally saving billions of lives is worth pursuing even if doing so increases existential risk by a tiny amount [ETA: though to be clear, I agree it would appear much more unreasonable if the reduction in existential risk were expected to be very large]. Loosely speaking, this idea only seems unreasonable to those who believe that existential risk is overwhelmingly more important than every other concern by many OOMs—so much so that it renders all other priorities essentially irrelevant. But that’s a fairly unusual and arguably extreme worldview, not an obvious truth.
It doesn’t seem unreasonable to me to suggest that literally saving billions of lives is worth pursuing even if doing so increases existential risk by a tiny amount. Loosely speaking, this idea only seems unreasonable to those who believe that existential risk is overwhelmingly more important than every other concern by many OOMs—so much so that it renders all other priorities essentially irrelevant.
It sounds like you’re talking about multi-decade pauses and imagining that people agree such a pause would only slightly reduce existential risk. But I think a well-timed, safety-motivated 5-year pause/slowdown (or shorter) is doable and could easily cut risk by a huge amount. (A factor of 2 feels about right to me, and I’d be sympathetic to higher: this would massively increase total work on safety.) I don’t think people are imagining that a pause/slowdown makes only a tiny difference!
I’d say that my all-considered tradeoff curve is something like 0.1% existential risk per year of delay. This does depend on exogenous risks of societal disruption (e.g., nuclear war, catastrophic pandemics, etc.). If we ignore exogenous risks like this and assume the only downside to delay is human deaths, I’d go down to 0.002% personally.[1] (Deaths are like 0.7% of the population per year, making a ~2.5 OOM difference.)
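(As a quick sanity check on that OOM comparison, here’s a minimal back-of-the-envelope sketch in Python, using only the rough numbers above:)

```python
import math

# Rough figures from the paragraph above.
annual_death_rate = 0.007          # ~0.7% of the population dies per year
tradeoff_no_exogenous = 0.00002    # 0.002% existential risk per year of delay

# Gap between the raw annual death rate and that tradeoff point, in orders of magnitude.
oom_gap = math.log10(annual_death_rate / tradeoff_no_exogenous)
print(round(oom_gap, 2))  # -> 2.54, i.e. roughly a 2.5 OOM difference
```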
My guess is that the “common sense” values tradeoff is more like 0.1% than 1% because of people caring more about kids and humanity having a future than defeating aging. (This is sensitive to whether AI takeover involves killing people and eliminating even relatively small futures for humanity, but I don’t think this makes more than a 3x difference to the bottom line.) People seem to generally think death isn’t that bad as long as people had a reasonably long healthy life. I disagree, but my disagreements are irrelevant. So, I feel like I’m quite in line with the typical moral perspective in practice.
[1] I edited this number to be a bit lower on further reflection: the relevant consideration pushing it higher is putting some weight on something like a common-sense ethics intuition, and the starting point for that intuition is considerably lower than 0.7%.
I’d say that my all-considered tradeoff curve is something like 0.1% existential risk per year of delay
For what it’s worth, from a societal perspective this seems very aggressive to me and a big outlier in human preferences. I would be extremely surprised if any government in the world would currently choose a 0.1% risk of extinction in order to accelerate AGI development by 1 year, if they actually faced that tradeoff directly. My guess is society-endorsed levels are closer to 0.01%.
As far as my views, it’s worth emphasizing that it depends on the current regime. I was supposing that at least the US was taking strong actions to resolve misalignment risk (which is resulting in many years of delay). In this regime, exogenous shocks might alter the situation such that powerful AI is developed under worse governance. I’d guess the risk of an exogenous shock like this is around 1% per year, and there’s some substantial chance this would greatly increase risk. So, in the regime where the government is seriously considering the tradeoffs and taking strong actions, I’d guess 0.1% is closer to rational (if you don’t have a preference against the development of powerful AI regardless of misalignment risk, which might be close to the preference of many people).
I agree that governments in practice wouldn’t eat a known 0.1% existential risk to accelerate AGI development by 1 year, but also governments aren’t taking AGI seriously. Maybe you mean even if they better understood the situation and were acting rationally? I’m not so sure; see, e.g., nuclear weapons, where governments seemingly eat huge catastrophic risks that seem doable to mitigate at some cost. I do think status quo bias might be important here. Accelerating by 1 year, which gets you 0.1% additional risk, might be very different from delaying by 1 year, which saves you 0.1%.
(Separately, I think existential risk isn’t extinction risk and this might make a factor of 2 difference to the situation if you don’t care at all about anything other than current lives.)
So, in the regime where the government is seriously considering the tradeoffs and taking strong actions, I’d guess 0.1% is closer to rational (if you don’t have a preference against the development of powerful AI regardless of misalignment risk, which might be close to the preference of many people).
Ah, sorry, if you are taking into account exogenous shifts in risk-attitudes and how careful people are, from a high baseline, I agree this makes sense. I was reading things as a straightforward 0.1% existential risk vs. 1 year of benefits from AI.
Yeah, on the straightforward tradeoff (ignoring exogenous shifts/risks etc.), I’m at more like 0.002% on my views.
To be clear, I agree there are reasonable values which result in someone thinking accelerating AI now is good, and values+beliefs which result in thinking a pause wouldn’t be good in likely circumstances.
And I don’t think cryonics makes much of a difference to the bottom line. (I think ultra-low-cost cryonics might make the cost to save a life ~20x lower than the current marginal cost, which might make interventions in this direction outcompete acceleration even under near-maximally pro-acceleration views.)
It sounds like you’re talking about multi-decade pauses and imagining that people agree such a pause would only slightly reduce existential risk. But I think a well-timed, safety-motivated 5-year pause/slowdown (or shorter) is doable and could easily cut risk by a huge amount.
I suspect our core disagreement here primarily stems from differing factual assumptions. Specifically, I doubt that delaying AI development—even if timed well and if the delay were long in duration—would meaningfully reduce existential risk beyond a tiny amount. However, I acknowledge I haven’t said much to justify this claim here. Given this differing factual assumption, pausing AI development seems somewhat difficult to justify from a common-sense moral perspective, and very difficult to justify from a worldview that puts primary importance on people who currently exist.
My guess is that the “common sense” values tradeoff is more like 0.1% than 1% because of people caring more about kids and humanity having a future than defeating aging.
I suspect the common-sense view is closer to 1% than 0.1%, though this partly depends on how we define “common sense” in this context. Personally, I tend to look to revealed preferences as indicators of what people genuinely value. Consider how much individuals typically spend on healthcare and how much society invests in medical research relative to explicit existential risk mitigation efforts. There’s an enormous gap, suggesting society greatly values immediate survival and the well-being of currently living people, and places relatively lower emphasis on abstract, long-term considerations about species survival as a concern separate from presently existing individuals.
Politically, existential risk receives negligible attention compared to conventional concerns impacting currently-existing people. If society placed as much importance on the distant future as you’re suggesting, the US government would likely have much lower debt, and national savings rates would probably be higher. Moreover, if individuals deeply valued the flourishing of humanity independently of the flourishing of current individuals, we probably wouldn’t observe such sharp declines in birth rates globally.
None of these pieces of evidence alone is a foolproof indicator that society doesn’t care that much about existential risk, but combined, they paint a picture of a society that’s significantly more short-term focused, and substantially more person-affecting, than you’re suggesting here.
Doesn’t the revealed preference argument also imply people don’t care much about dying from aging? (This is invested in even less than catastrophic risk mitigation, and people don’t take interventions that would prolong their lives considerably.) I agree revealed preferences imply people care little about the long run future of humanity, but they do imply caring much more about children living full lives than about old people avoiding aging. I’d guess that a reasonable version of the pure revealed preference view is a bit below the mortality rate of people in their 30s, which is about 0.25% per year (in the US). If we halve this (to account for some preference for children etc.), we get ~0.1%.
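(For concreteness, a minimal sketch of that arithmetic in Python, treating the mortality figure above as approximate:)

```python
# Approximate annual mortality rate for people in their 30s in the US
# (the rough figure used above).
mortality_30s = 0.0025  # ~0.25% per year

# Halve it to account for some extra preference for children etc.
revealed_preference_tradeoff = mortality_30s / 2
print(f"{revealed_preference_tradeoff:.2%} per year of delay")  # -> 0.12%, i.e. ~0.1%
```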
(I don’t really feel that sympathetic to using revealed preferences like this. It would also imply lots of strange things. Minimally I don’t think how people typically use the term “common-sense values” maps very well to revealed preference, but this is just a definitions thing.)
Consider how much individuals typically spend on healthcare and how much society invests in medical research relative to explicit existential risk mitigation efforts. There’s an enormous gap, suggesting society greatly values immediate survival and the well-being of currently living people, and places relatively lower emphasis on abstract, long-term considerations about species survival as a concern separate from presently existing individuals.
[...]
Politically, existential risk receives negligible attention compared to conventional concerns impacting currently-existing people. If society placed as much importance on the distant future as you’re suggesting, the US government would likely have much lower debt, and national savings rates would probably be higher. Moreover, if individuals deeply valued the flourishing of humanity independently of the flourishing of current individuals, we probably wouldn’t observe such sharp declines in birth rates globally.
I think you misinterpreted my claims as being about the long run future (and people not being person-affecting, etc.), while I mostly meant that people don’t care that much about deaths from old age.
When I said “caring more about kids and humanity having a future than defeating aging”, my claim is that people don’t care that much about deaths from natural causes (particularly aging) and care more about their kids and people being able to continue living for some (not-that-long) period, not that they care about the long run future. By “humanity having a future”, I didn’t mean millions of years from now; I meant their kids being able to grow up and live a normal life, and so on for at least several generations.
Note that I said “This is sensitive to whether AI takeover involves killing people and eliminating even relatively small futures for humanity, but I don’t think this makes more than a 3x difference to the bottom line.” (To clarify, I don’t think it makes that big a difference because I think it’s hard to get an expected fatality rate 3x below where I’m putting it.)
Doesn’t the revealed preference argument also imply people don’t care much about dying from aging? (This is invested in even less than catastrophic risk mitigation and people don’t take interventions that would prolong their lives considerably.) I agree revealed preferences imply people care little about the long run future of humanity, but they do imply caring much more about children living full lives than old people avoiding aging.
I agree that the amount of funding explicitly designated for anti-aging research is very low, which suggests society doesn’t prioritize curing aging as a social goal. However, I think your overall conclusion is significantly overstated. A very large fraction of conventional medical research specifically targets health and lifespan improvements for older people, even though it isn’t labeled explicitly as “anti-aging.”
Biologically, aging isn’t a single condition but rather the cumulative result of multiple factors and accumulated damage over time. For example, anti-smoking campaigns were essentially efforts to slow aging by reducing damage to smokers’ bodies—particularly their lungs—even though these campaigns were presented primarily as life-saving measures rather than “anti-aging” initiatives. Similarly, society invests a substantial amount of time and resources in mitigating biological damage caused by air pollution and obesity.
Considering this broader understanding of aging, it seems exaggerated to claim that people aren’t very concerned about deaths from old age. I think public concern depends heavily on how the issue is framed. My prediction is that if effective anti-aging therapies became available and were proven successful, most people would eagerly pay high sums for them, and there would be widespread political support for subsidizing those technologies.
Right now explicit support for anti-aging research is indeed politically very limited, but that’s partly because robust anti-aging technologies haven’t been clearly demonstrated yet. Medical technologies that have proven effective at slowing aging (even if not labeled as such) have generally been marketed as conventional medical technologies and typically enjoy widespread political support and funding.
I think I mostly agree with your comment and partially update; the absolute revealed caring about older people living longer is substantial.
One way to frame the question is “how much does society care about children and younger adults dying vs. people living to 130?” I think people’s stated preferences would be something like 5-10x for the children/younger adults (at least for their children, while they are dying of aging), but I don’t think this will clearly show itself in healthcare spending prioritization, which is all over the place.
Random other slightly related point: if we’re looking at society-wide revealed preference based on things like spending, then “preservation of the current government power structures” is actually quite substantial and pushes toward society caring more about AIs gaining control (and overthrowing the US government, at least de facto). To be clear, I don’t think a per-person preference-utilitarian style view should care much about this.
Even if ~all that pausing does is delay existential risk by 5 years, isn’t that still totally worth it? If we would otherwise die of AI ten years from now, then a pause creates +50% more value in the future. Of course it’s a far cry from all 1e50 future QALYs we maybe could create, but I’ll take what I can get at this point. And a short-termist view would hold that even more important.
I agree that delaying a pure existential risk that has no potential upside—such as postponing the impact of an asteroid that would otherwise destroy complex life on Earth—would be beneficial. However, the risk posed by AI is fundamentally different from something like an asteroid strike because AI is not just a potential threat: it also carries immense upside potential to improve and save lives. Specifically, advanced AI could dramatically accelerate the pace of scientific and technological progress, including breakthroughs in medicine. I expect this kind of progress would likely extend human lifespans and greatly enhance our quality of life.
Therefore, if we delay the development of AI, we are likely also delaying these life-extending medical advances. As a result, people who are currently alive might die of aging-related causes before these benefits become available. This is a real and immediate issue that affects those we care about today. For instance, if you have elderly relatives whom you love and want to see live longer, healthier lives, then—assuming all else is equal—it makes sense to want rapid medical progress to occur sooner rather than later.
This is not to say that we should accelerate AI recklessly and do it even if that would dramatically increase existential risk. I am just responding to your objection, which was premised on the idea that delaying AI could be worth it even if delaying AI doesn’t reduce x-risk at all.
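For concreteness, here’s a rough back-of-the-envelope sketch of both sides of this: the “+50%” figure from the quoted comment, and the cost I’m pointing at if delay also postpones life-extending advances (reusing the ~0.7%/year death rate mentioned earlier in the thread; these numbers are illustrative, not estimates I’m defending):

```python
# The quoted comment's arithmetic: if an AI catastrophe would otherwise arrive
# in ~10 years, a 5-year pause stretches the remaining time from 10 to 15 years.
baseline_years, pause_years = 10, 5
extra_time = (baseline_years + pause_years) / baseline_years - 1
print(f"extra time before the risk arrives: +{extra_time:.0%}")  # -> +50%

# The cost side: if the pause also delays medical advances, then (very crudely)
# the people who die of aging-related causes during it are the price. Using the
# ~0.7%/year death rate cited earlier in the thread as a rough upper bound on
# what acceleration could eventually avert:
annual_death_rate = 0.007
deaths_during_pause = pause_years * annual_death_rate
print(f"deaths during a {pause_years}-year pause: ~{deaths_during_pause:.1%} of the current population")  # -> ~3.5%
```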
Presumably, under a common-sense person-affecting view, this doesn’t just depend on the upside; it also depends on the absolute level of risk. E.g., suppose that building powerful AI killed 70% of people in expectation and delay had no effect on the ultimate risk. I think a (human-only) person-affecting and common-sense view would delay indefinitely. I’d guess that the point at which a person-affecting common-sense view would delay indefinitely (supposing delay didn’t reduce risk, that we have the current demographic distribution, and that there wasn’t some global emergency) is around 5-20% expected fatalities, but I’m pretty unsure, and it depends on some pretty atypical hypotheticals that don’t come up very much. Typical people are pretty risk-averse though, so I wouldn’t be surprised if a real “common-sense” view would go much lower.
(Personally, I’d be unhappy about an indefinite delay even if risk were unavoidably very high, because I’m mostly longtermist. A moderate-length delay to save some lives, where we eventually get to the future, seems good to me, though I’d broadly prefer no delay if delay isn’t improving the situation from the perspective of the long run future.)