It sounds like you’re talking about multi-decade pauses and imagining that people agree such a pause would only slightly reduce existential risk. But I think a well-timed, safety-motivated 5-year pause/slowdown (or shorter) is doable and could easily cut risk by a huge amount.
I suspect our core disagreement here primarily stems from differing factual assumptions. Specifically, I doubt that delaying AI development—even if timed well and if the delay were long in duration—would meaningfully reduce existential risk beyond a tiny amount. However, I acknowledge I haven’t said much to justify this claim here. Given this differing factual assumption, pausing AI development seems somewhat difficult to justify from a common-sense moral perspective, and very difficult to justify from a worldview that puts primary importance on people who currently exist.
My guess is that the “common sense” values tradeoff is more like 0.1% than 1% because of people caring more about kids and humanity having a future than defeating aging.
I suspect the common-sense view is closer to 1% than 0.1%, though this partly depends on how we define “common sense” in this context. Personally, I tend to look to revealed preferences as indicators of what people genuinely value. Consider how much individuals typically spend on healthcare and how much society invests in medical research relative to explicit existential risk mitigation efforts. There’s an enormous gap, suggesting society greatly values immediate survival and the well-being of currently living people, and places relatively lower emphasis on abstract, long-term considerations about species survival as a concern separate from presently existing individuals.
Politically, existential risk receives negligible attention compared to conventional concerns impacting currently-existing people. If society placed as much importance on the distant future as you’re suggesting, the US government would likely have much lower debt, and national savings rates would probably be higher. Moreover, if individuals deeply valued the flourishing of humanity independently of the flourishing of current individuals, we probably wouldn’t observe such sharp declines in birth rates globally.
None of these pieces of evidence is, on its own, a foolproof indicator that society doesn’t care that much about existential risk, but combined they paint a picture of our society that’s significantly more short-term focused and substantially more person-affecting than you’re suggesting here.
Doesn’t the revealed preference argument also imply people don’t care much about dying from aging? (Anti-aging receives even less investment than catastrophic risk mitigation, and people don’t take interventions that would considerably prolong their lives.) I agree revealed preferences imply people care little about the long-run future of humanity, but they do imply caring much more about children living full lives than about old people avoiding aging. I’d guess that a reasonable version of the pure revealed-preference view is a bit below the annual mortality rate of people in their 30s, which is about 0.25% (in the US). If we halve this (to account for some preference for children, etc.), we get roughly 0.1%.
(I don’t really feel that sympathetic to using revealed preferences like this. It would also imply lots of strange things. Minimally I don’t think how people typically use the term “common-sense values” maps very well to revealed preference, but this is just a definitions thing.)
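For concreteness, here is a minimal back-of-the-envelope sketch of where the ~0.1% figure above comes from, assuming the per-year cost of delay is benchmarked at the annual mortality rate of people in their 30s and then halved; the numbers are the rough ones from the comment, not precise estimates.

```python
# Back-of-the-envelope sketch of the ~0.1% revealed-preference figure above.
# Assumptions: the per-year "cost" of delay is benchmarked at the annual
# mortality rate of people in their 30s in the US (~0.25%), then halved to
# reflect a stronger revealed preference for children/younger adults.
mortality_rate_30s = 0.0025   # ~0.25% per year (rough US figure)
preference_adjustment = 0.5   # halve to account for some preference for children etc.

breakeven_tradeoff = mortality_rate_30s * preference_adjustment
print(f"Implied per-year tradeoff: {breakeven_tradeoff:.2%}")  # ~0.12%, i.e. roughly 0.1%
```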
Consider how much individuals typically spend on healthcare and how much society invests in medical research relative to explicit existential risk mitigation efforts. There’s an enormous gap, suggesting society greatly values immediate survival and the well-being of currently living people, and places relatively lower emphasis on abstract, long-term considerations about species survival as a concern separate from presently existing individuals.
[...]
Politically, existential risk receives negligible attention compared to conventional concerns impacting currently-existing people. If society placed as much importance on the distant future as you’re suggesting, the US government would likely have much lower debt, and national savings rates would probably be higher. Moreover, if individuals deeply valued the flourishing of humanity independently of the flourishing of current individuals, we probably wouldn’t observe such sharp declines in birth rates globally.
I think you misinterpreted my claims as being about the long-run future (and people not being person-affecting, etc.), while I mostly meant that people don’t care that much about deaths due to old age.
When I said “caring more about kids and humanity having a future than defeating aging”, my claim is that people don’t care that much about deaths from natural causes (particularly aging) and care more about their kids and people being able to continue living for some (not-that-long) period, not that they care about the long run future. By “humanity having a future”, I didn’t mean millions of years from now, I meant their kids being able to grow up and live a normal life and so on for at least several generations.
Note that I said “This is sensitive to whether AI takeover involves killing people and eliminating even relatively small futures for humanity, but I don’t think this makes more than a 3x difference to the bottom line.” (To clarify, I don’t think it makes that big a difference because I think it’s hard to get an expected fatality rate 3x below where I’m putting it.)
Doesn’t the revealed preference argument also imply people don’t care much about dying from aging? (Anti-aging receives even less investment than catastrophic risk mitigation, and people don’t take interventions that would considerably prolong their lives.) I agree revealed preferences imply people care little about the long-run future of humanity, but they do imply caring much more about children living full lives than about old people avoiding aging.
I agree that the amount of funding explicitly designated for anti-aging research is very low, which suggests society doesn’t prioritize curing aging as a social goal. However, I think your overall conclusion is significantly overstated. A very large fraction of conventional medical research specifically targets health and lifespan improvements for older people, even though it isn’t labeled explicitly as “anti-aging.”
Biologically, aging isn’t a single condition but rather the cumulative result of multiple factors and accumulated damage over time. For example, anti-smoking campaigns were essentially efforts to slow aging by reducing damage to smokers’ bodies—particularly their lungs—even though these campaigns were presented primarily as life-saving measures rather than “anti-aging” initiatives. Similarly, society invests a substantial amount of time and resources in mitigating biological damage caused by air pollution and obesity.
Considering this broader understanding of aging, it seems exaggerated to claim that people aren’t very concerned about deaths from old age. I think public concern depends heavily on how the issue is framed. My prediction is that if effective anti-aging therapies became available and proved successful, most people would eagerly pay high sums for them, and there would be widespread political support for subsidizing those technologies.
Right now explicit support for anti-aging research is indeed politically very limited, but that’s partly because robust anti-aging technologies haven’t been clearly demonstrated yet. Medical technologies that have proven effective at slowing aging (even if not labeled as such) have generally been marketed as conventional medical technologies and typically enjoy widespread political support and funding.
I think I mostly agree with your comment and partially update: the absolute revealed caring about older people living longer is substantial.
One way to frame the question is “how much does society care about children and younger adults dying vs. people living to 130?” I think people’s stated preferences would be something like 5-10x for the children/younger adults (at least for their own children while they themselves are dying of aging), but I don’t think this will clearly show up in healthcare spending prioritization, which is all over the place.
Random other slightly related point: if we’re looking at society-wide revealed preferences based on things like spending, then “preservation of the current government power structures” is actually quite substantial, which pushes toward society caring more about AIs gaining control (and overthrowing the US government, at least de facto). To be clear, I don’t think a per-person preference-utilitarian-style view should care much about this.
Even if ~all that pausing does is delay existential risk by 5 years, isn’t that still totally worth it? If we would otherwise die of AI ten years from now, then a pause creates +50% more value in the future. Of course it’s a far cry from all the 1e50 future QALYs we maybe could create, but I’ll take what I can get at this point. And a short-termist view would consider that even more important.
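To make the arithmetic behind the “+50% more value” claim explicit, here is a minimal sketch assuming everyone alive otherwise gets about ten more years of value and a five-year pause pushes that out by five; the numbers are purely the illustrative ones from the comment above.

```python
# Sketch of the "+50% more value" arithmetic above (illustrative numbers only).
# Assumption: without a pause, people alive today get ~10 more years of value;
# a 5-year pause simply pushes the same outcome 5 years later.
years_without_pause = 10
pause_length = 5
years_with_pause = years_without_pause + pause_length

relative_gain = (years_with_pause - years_without_pause) / years_without_pause
print(f"Extra value from the pause: {relative_gain:.0%}")  # -> 50%
```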
I agree that delaying a pure existential risk that has no potential upside—such as postponing the impact of an asteroid that would otherwise destroy complex life on Earth—would be beneficial. However, the risk posed by AI is fundamentally different from something like an asteroid strike because AI is not just a potential threat: it also carries immense upside potential to improve and save lives. Specifically, advanced AI could dramatically accelerate the pace of scientific and technological progress, including breakthroughs in medicine. I expect this kind of progress would likely extend human lifespans and greatly enhance our quality of life.
Therefore, if we delay the development of AI, we are likely also delaying these life-extending medical advances. As a result, people who are currently alive might die of aging-related causes before these benefits become available. This is a real and immediate issue that affects those we care about today. For instance, if you have elderly relatives whom you love and want to see live longer, healthier lives, then—assuming all else is equal—it makes sense to want rapid medical progress to occur sooner rather than later.
This is not to say that we should accelerate AI recklessly and do it even if that would dramatically increase existential risk. I am just responding to your objection, which was premised on the idea that delaying AI could be worth it even if delaying AI doesn’t reduce x-risk at all.
Presumably, under a common-sense person-affecting view, this doesn’t just depend on the upside and also depends on the absolute level of risk. E.g., suppose that building powerful AI killed 70% of people in expectation and delay had no effect on the ultimate risk. I think a (human-only) person-affecting and common-sense view would delay indefinitely. I’d guess that the point at which a person-affecting common-sense view would delay indefinitely (supposing delay didn’t reduce risk and that we have the current demographic distribution and there wasn’t some global emergency) is around 5-20% expected fatalities, but I’m pretty unsure and it depends on some pretty atypical hypotheticals that don’t come up very much. Typical people are pretty risk averse though, so I wouldn’t be surprised if a real “common-sense” view would go much lower.
(Personally, I’d be unhappy about an indefinite delay even if the risk were unavoidably very high, because I’m mostly longtermist. A moderate-length delay to save some lives, where we eventually get to the future, seems good to me, though I’d broadly prefer no delay if the delay isn’t improving the situation from the perspective of the long-run future.)