Could you say a bit more about why you view the “extinction >>> 8B” as so important?
I’d have assumed that at your P(extinction), even treating extinction as just 8B deaths, the expected death toll still vastly outweighs the possible lives saved from AI medical progress?
I don’t think it’s remotely as obvious then! If you don’t care about future people, then your key priority is to achieve immortality for the current generation, for which I do think building AGI is probably your best bet.
If it were to take 50+ years to build AGI, most people currently on earth would have died of aging by then, so you should probably have just rushed towards AGI, if you think doing so would have been less than 50% likely to cause extinction.
People who hold this position are arguing for things like “we should only slow down AI development if for each year of slowing down we would be reducing risk of human extinction by more than 1%”, which is a policy that if acted on consistently would more likely than not cause humanity’s extinction within 100 years (as you would be accepting a minimum of a 1% chance of death each year in exchange for faster AI development).
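Spelling out the arithmetic behind “more likely than not cause humanity’s extinction within 100 years” (my gloss of the compounding, not something stated explicitly above): if each year carries an independent 1% chance of extinction, then

$$P(\text{extinction within 100 years}) \ge 1 - (1 - 0.01)^{100} \approx 0.63 > 0.5,$$

and the cumulative risk crosses 50% after about 69 years, since $\log 0.5 / \log 0.99 \approx 69$.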
Here are ChatGPT’s actuarial tables about how long the current population is expected to survive:

| Time horizon | Share of people alive today who are expected to die* | Rough number of deaths (out of 8.23 billion) |
|---|---|---|
| 10 years (2035) | ≈ 8% | ~0.65 billion |
| 20 years (2045) | ≈ 25% | ~2.0 billion |
| 30 years (2055) | ≈ 49% | ~4.0 billion |
| 40 years (2065) | ≈ 74% | ~6.1 billion |
| 50 years (2075) | ≈ 86% | ~7.1 billion |
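For readers who want to sanity-check figures like these, here is a minimal sketch of how such a table is computed, assuming a made-up age distribution and an illustrative Gompertz mortality curve (placeholders of my own, not real demographic data or anything ChatGPT actually used):

```python
import math

def gompertz_annual_mortality(age, a=0.0002, b=0.085):
    """Illustrative Gompertz hazard: annual death probability rising exponentially with age."""
    return min(1.0, a * math.exp(b * age))

def survival_probability(age, years):
    """Probability someone of `age` survives `years` more years under the illustrative hazard."""
    p = 1.0
    for t in range(years):
        p *= 1.0 - gompertz_annual_mortality(age + t)
    return p

# Made-up age distribution: (age-bucket midpoint, share of world population). Shares sum to 1.
age_distribution = [(5, 0.17), (15, 0.16), (25, 0.16), (35, 0.15),
                    (45, 0.13), (55, 0.11), (65, 0.07), (75, 0.04), (85, 0.01)]

POPULATION_BILLIONS = 8.23

for horizon in (10, 20, 30, 40, 50):
    share_dead = sum(share * (1.0 - survival_probability(age, horizon))
                     for age, share in age_distribution)
    print(f"{horizon:>2} years: ~{share_dead:.0%} expected to die "
          f"(~{share_dead * POPULATION_BILLIONS:.1f} billion)")
```

Real estimates would swap in UN age pyramids and period life tables, but the structure of the calculation is the same: weight each age group’s probability of dying within the horizon by its share of the population.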
By the logic of the future not being bigger than 8 billion people, you should lock in a policy that has a 50% chance of causing human extinction if it allows the people currently alive to extend their lifespans by more than ~35 years. I am more doomy than that about AI, in that I assign much more than 50% probability to deploying superintelligence killing everyone, but it’s definitely a claim that requires a lot more thinking through than the usual “the risk is at least 10% or so”.
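One way to make the ~35-year figure concrete (my reconstruction, ignoring complications like who survives long enough to benefit): counting only the 8 billion people currently alive, a coin-flip gamble on extinction is positive in expected life-years exactly when the extension $\Delta$ it buys exceeds the average remaining lifespan $R$ of the current population, which the table above puts on the order of 35 years:

$$\tfrac{1}{2}\Delta - \tfrac{1}{2}R > 0 \iff \Delta > R \approx 35 \text{ years}.$$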
Thanks for explaining that, really appreciate it! One thing I notice I’d been assuming: that “8B-only” people would have a policy like “care about the 8B people who are living today, but also the people in, say, 20 years who’ve been born in the intervening time period.” But that’s basically just a policy of caring about future people! Because there’s not really a difference between “future people at the point that they’ve actually been born” and “future people generally”.
I have different intuitions about “causing someone not to be born” versus “waiting for someone to be born, and then killing them”. So I do think that if someone sets in motion today events that reliably end in the human race dying out in 2035, the moral cost of this might be any of:
1. “the people alive in both 2025 and 2035”
2. “everyone alive in 2035”
3. “everyone alive in 2035, plus (perhaps with some discounting) all the kids they would have had, and the kids they would have had...”
according to different sets of intuitions. And actually I guess (1) would be rarest, so even though both (2) and (3) involve “caring about future people” in some sense, I do think they’re important to distinguish. (Caring about “future-present” versus “future-future” people?)
People who hold this position are arguing for things like “we should only slow down AI development if for each year of slowing down we would be reducing risk of human extinction by more than 1%”, which is a policy that if acted on consistently would more likely than not cause humanity’s extinction within 100 years (as you would be accepting a minimum of a 1% chance of death each year in exchange for faster AI development).
If your goal is to maximize the expected fraction of currently alive humans who live for over 1000 years, you shouldn’t in fact keep making gambles that make it more likely than not that everyone dies, unless it turns out that it’s really hard to achieve this without immense risk. Perhaps that is your view: the only (realistic) way to get risk below ~50% is to delay for over 30 years. But this is by no means a consensus perspective among those who are very worried about AI risk.
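A toy model of the tradeoff being discussed (all numbers below are hypothetical placeholders I’m choosing for illustration, not anyone’s actual estimates): the objective “expected fraction of currently alive humans who live 1000+ years” factors into the chance the gamble succeeds times the fraction of today’s population still alive when it pays off, so both rushing and delaying have costs.

```python
# Toy model (hypothetical numbers): expected fraction of currently alive
# people who make it to radical life extension, as a function of how many
# years superintelligence is delayed.

def extinction_risk(delay_years):
    """Hypothetical: risk falls from ~40% toward a ~5% floor as delay buys safety work."""
    return 0.05 + 0.35 * (0.9 ** delay_years)

def fraction_still_alive(delay_years):
    """Hypothetical: share of today's population still alive after the delay
    (very roughly shaped like the actuarial table above)."""
    return max(0.0, 1.0 - 0.017 * delay_years)

for delay in (0, 5, 10, 20, 30, 50):
    expected = (1.0 - extinction_risk(delay)) * fraction_still_alive(delay)
    print(f"delay {delay:>2}y: risk {extinction_risk(delay):.0%}, "
          f"alive {fraction_still_alive(delay):.0%}, "
          f"expected beneficiaries {expected:.0%}")
```

Under these made-up curves the optimum is a moderate delay rather than either extreme; with different but equally defensible curves the answer shifts, which is why the empirical question of how much risk a year of delay buys does most of the work here.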
Separately, I don’t expect we face many tradeoffs other than AI between eliminating human control over the future and the probability of currently alive people living for much longer, so after we eat that one, there aren’t further tradeoffs to make. I think you agree with this, but your wording makes it seem as though you think there are ongoing hard tradeoffs that can’t be avoided.
I think that “we should only slow down AI development if for each year of slowing down we would be reducing risk of human extinction by more than 1%” is not a sufficient crux for the (expensive) actions which I most want at current margins, at least if you have my empirical views. I think it is very unlikely (~7%?) that in practice we reach near the level of response (in terms of spending/delaying for misalignment risk reduction) that would be rational given this “1% / year” view and my empirical views, so my empirical views suffice to imply very different actions.
For instance, delaying for ~10 years prior to building wildly superhuman AI (while using controlled AIs at or somewhat below the level of top human experts) seems like it probably makes sense on my empirical views combined with this moral perspective, especially if you can use the controlled AIs to substantially reduce or delay ongoing deaths, which seems plausible. Things like massively investing in safety/alignment work also easily make sense. There are policies we could be applying that substantially reduce the risk and merely require massive effort (without particularly delaying powerful AI).
I do think that this policy wouldn’t be on board with the sort of long pause that (e.g.) MIRI often discusses, and it does materially alter what look like the best policies (though ultimately I don’t expect to get close to these best policies anyway).
habryka—‘If you don’t care about future people’—but why would any sane person not care at all about future people?
You offer a bunch of speculative math about longevity vs extinction risk.
OK, why not run some actual analysis on which is more likely to promote longevity research: direct biomedical research on longevity, or indirect AI research on AGI in hopes that it somehow, speculatively, solves longevity?
The AI industry is currently spending something on the order of $200 billion a year on research. Biomedical research spending on longevity, by contrast, is currently far less than $10 billion a year.
If we spent the $200 billion a year on longevity, instead of on AI, do you seriously think that we’d do worse on solving longevity? That’s what I would advocate. And it would involve virtually no extinction risk.
You are reading things into my comments that I didn’t say. I of course don’t endorse, or consider reasonable, “not caring about future people”; that’s the whole context of this subthread.
My guess is that if one did adopt the position that no future people matter (which, again, I do not think is a reasonable position), then the case for slowing down AI looks a lot worse. Not bad enough to make it an obvious slam dunk that slowing down is bad, and my guess is that overall, even under that worldview, it would be dumb to rush towards developing AGI like we are currently doing, but it makes the case a lot weaker. There is much less to lose if you do not care about the future.
If we spent the $200 billion a year on longevity, instead of on AI, do you seriously think that we’d do worse on solving longevity? That’s what I would advocate. And it would involve virtually no extinction risk.
My guess is for the purpose of just solving longevity, AGI investment would indeed strongly outperform general biomedical investment. Humanity just isn’t very good at turning money into medical progress on demand like this.
It seems virtuous and good to be clear about which assumptions are load-bearing for my recommended actions. If I didn’t care about the future, I would definitely be advocating for a different mix of policies: it would likely still involve marginal AI slowdown, though my guess is I’d push for it less forcefully, and a bunch of slowdown-related actions would become net bad.