I used to work with hospice patients, and typically the ones who were the least worried and most at peace were those who had most radically accepted the inevitable. The post you’ve written in response to reads to me like healthy processing of grief, and like someone trying to come to terms with a bleak outlook. To tell them, essentially, “it’s fine, the experts have got this” feels disingenuous and like a recipe for denialism. When that paternalistic attitude dominates, business as usual reigns, often to catastrophic ends. Even if we don’t have control over the AI outcome broadly, we do have control over many aspects of our lives that are impacted by AI, and it’s reasonable to make decisions in those areas contingent on one’s P-doom (e.g. prioritizing family over career in the short term). There’s a reason that in medicine people should be told the good and the bad about all options, and be given expectations, before they decide on a course of treatment, instead of just leaving things to the experts.
As I wrote above, I think the hospice analogy is very off the mark. The risk of nuclear war is closer, but it is also not a good analogy, in the sense that nuclear war was always a zero/one thing: it either happens or it doesn’t, and if it doesn’t you do not feel it at all.
With AI, people already feel it and definitely will, for both good and bad. I just think the most likely outcome is that the good will be much greater than the bad.
it either happens or it doesn’t, and if it doesn’t you do not feel it at all.

What? Nuclear war is very centrally the kind of thing where it really matters how you prepare for it. It was always extremely unlikely to be an existential risk, and even relatively simple precautions would drastically increase the likelihood you would survive.
(Probably tangential, but:)

even relatively simple precautions would drastically increase the likelihood you would survive

This seems wrong to me, for most people, in the event of a prolonged supply chain collapse, which seems a likely consequence of large-scale nuclear war. It could be true given significant probability on either a limited nuclear exchange or a quick recovery of supply chains after a large war.
Huh, why? Even a full-scale nuclear exchange would have little effect on most food production in the US, which seems like the only actually critical part. There are some countries that would have serious issues here, but for the US, most essential food supply chains really aren’t that long, and you already have, by default, on the order of 6 months to a year of local stockpiles (and this is one of the things you could easily increase to 2-3 years at relatively little cost). It would be actively surprising to me if food supply chains didn’t recover within 2-3 years.
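The claim above is essentially back-of-envelope arithmetic, and can be sketched as such. (The 6-12 month stockpile figures come from the comment; the production-fraction values are purely hypothetical placeholders.)

```python
# Back-of-envelope sketch of the stockpile argument above. The 6-12 month
# stockpile figures come from the comment; the "fraction of normal food
# production still running" values are hypothetical placeholders.

def months_until_depleted(stockpile_months: float, production_fraction: float) -> float:
    """How long stockpiles last if domestic production covers only
    `production_fraction` of normal consumption and stockpiles cover
    the rest. Consumption is normalized to 1 stockpile-month per month."""
    shortfall = 1.0 - production_fraction  # drawn from stockpile each month
    if shortfall <= 0:
        return float("inf")  # production alone meets demand
    return stockpile_months / shortfall

# 6 months of stockpiles, half of production knocked out: lasts 12 months.
print(months_until_depleted(6, 0.5))    # 12.0
# 12 months of stockpiles, a quarter of production knocked out: lasts 48
# months, comfortably covering the 2-3 year recovery window suggested above.
print(months_until_depleted(12, 0.75))  # 48.0
```

The point of the toy model is just that stockpiles buy recovery time in proportion to how much production survives, so the conclusion is sensitive to the (here invented) production fractions.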
I take it you think nuclear winter is unlikely?
Also, how are you going to get the phosphate fertilizer to grow crops after a nuclear war?
Most expositions of existential risk I have seen count nuclear war as an example. Bostrom (2001) certainly considers nuclear war an existential risk. What I meant by “it either happens or it doesn’t” is that since 1945 no nuclear weapon has been used in war, so the average person “did not feel it”; and given U.S. and Russian posture, it is quite possible that any use by one of them against the other would lead to total nuclear war.
Also, while it was possible to take precautions, like fallout shelters, the plan to build fallout shelters for most U.S. citizens fizzled and was defunded in the 1970s. So I think it is fair to say that most Americans and Russians did not spend much of their time thinking about or actively preparing for nuclear holocaust.
I am not necessarily saying that was the right thing: maybe the fallout shelters should not have been defunded, and should have been built, and people should have advocated for that. But I think it would still have been wise for them to try to live their daily lives without being gripped by fear.
Sure, though that coverage has turned out to be wrong, so it’s still a bad example. See also: https://www.lesswrong.com/posts/sT6NxFxso6Z9xjS7o/nuclear-war-is-unlikely-to-cause-human-extinction
(Also Bostrom’s coverage is really quite tentative, saying “An all-out nuclear war was a possibility with both a substantial probability and with consequences that might have been persistent enough to qualify as global and terminal. There was a real worry among those best acquainted with the information available at the time that a nuclear Armageddon would occur and that it might annihilate our species or permanently destroy human civilization”)
Given the probabilities involved, it does seem to me like we vastly, vastly underinvested in nuclear recovery efforts (in substantial part because of this dumb “either it doesn’t happen or we all die” mentality).
To be clear, this is importantly different from my models of AI risk, which really does have much more of that nature as far as I can tell.
Comparing nuclear risks to AI is a bit unfair: the reason we can give such detailed calculations of blast effects, kinetic force, etc., is that nuclear warheads are real, actually deployed, and can be launched at a moment’s notice. With ASI you cannot calculate exactly how many people it would kill, precisely because it does not exist.
I am not advocating that policy makers should have taken an “either it doesn’t happen or we all die” mentality for nuclear policy. (While this is not my field, I did do some work in the nuclear disarmament space.)
But I would say that this was (and is) the mindset of the typical person living in an American urban center. (If you live in such an area, you can go to NUKEMAP and see what the impact of one or more ~500 kt warheads, of the type carried by Russian R-36 missiles, would be in your vicinity.)
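For intuition on where tools like NUKEMAP get such numbers: blast damage radii follow an approximate cube-root scaling law in yield. A minimal sketch, assuming a textbook reference radius of roughly 7 km for 5 psi overpressure from a 1 Mt airburst (an approximate figure; treat the output as order-of-magnitude only):

```python
# Cube-root scaling of blast radius with yield: R ≈ R_ref * (Y / Y_ref)^(1/3).
# The 7 km / 1 Mt reference (5 psi overpressure, airburst) is an approximate
# textbook figure, so the result is order-of-magnitude only.

def blast_radius_km(yield_kt: float,
                    ref_radius_km: float = 7.0,
                    ref_yield_kt: float = 1000.0) -> float:
    """Scale a reference blast-damage radius to another yield."""
    return ref_radius_km * (yield_kt / ref_yield_kt) ** (1.0 / 3.0)

# A ~500 kt warhead, like the R-36 payloads mentioned above:
print(round(blast_radius_km(500), 1))  # 5.6 (km of ~5 psi overpressure)
```

The cube-root law is why a warhead with half the yield still has nearly 80% of the damage radius, which is part of what makes urban centers so hard to meaningfully protect.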
People have been living their lives under the threat that they and everyone they know could be extinguished at a moment’s notice. I think the ordinary U.S. or Russian citizen probably should have done more, and cared more, to promote nuclear disarmament. But I don’t think they (we) should live in a constant state of fear either.