Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor at the University of Canterbury in mechanical engineering. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 134 publications (>4,400 citations, >50,000 downloads, h-index = 34, second most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 25 countries and over 300 articles, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German public radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including talks on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, and University College London.
Kuhlemann argues that human overpopulation is the best example of an “unsexy” global catastrophic risk, yet it is not taken seriously by the vast majority of global catastrophic risk scholars.
I think the reason overpopulation is generally not taken seriously by the GCR community is that they don’t believe it would be catastrophic. Some believe that there would be a small reduction in per capita income, but greater total utility. Others argue that a larger population would actually raise per capita income and could be key to maintaining long-term innovation.
This is a tricky thing to define, because by some definitions we are already in the five-year countdown of a slow takeoff.
Some people advocate for using GDP, so the beginning would be when you can see the AI signal in the noise (which we can’t yet).
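As a minimal sketch of what “seeing the AI signal in the noise” could look like (all numbers below are hypothetical placeholders, not real GDP data):

```python
# Minimal sketch: is recent GDP growth a detectable outlier against the
# historical noise? All figures are hypothetical placeholders, not real data.
import statistics

historical_growth = [2.3, 1.8, 2.5, 2.9, 1.6, 2.2, 2.0, 2.4, 1.9, 2.3]  # %/yr
recent_growth = 2.6  # %/yr

mean = statistics.mean(historical_growth)
stdev = statistics.stdev(historical_growth)
z = (recent_growth - mean) / stdev

# One crude criterion: the takeoff "begins" once the z-score stays above ~2
# for several consecutive years, i.e., growth is clearly outside the noise.
print(f"z-score of recent growth: {z:.2f}")
```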
Nuclear triad aside, there’s the fact that the Arctic is more than 1,000 miles away from the nearest US land (about 1,700 miles from Montana, 3,000 miles from Texas), and that Siberia is already roughly as close.
Well, there’s Alaska, but yes, part of Russia is only ~55 miles away from Alaska, so the overall point stands that Russia having a greater presence in the Arctic doesn’t change things very much.
And of course, there’s the fact that the Arctic is made of, well, ice, which melts more and more as the climate warms, and is thus not the best place to build a missile base.
That’s not what is being proposed. It is building more bases in ports on land where the water freezes less because of climate change.
If negative effects are worse than expected, it can’t be reversed.
I agree that MCB can be reversed faster, but still being able to reverse in a few years is pretty responsive. There are strong interactions with other GCRs. For instance, here’s a paper that argues that if we have a catastrophe like an extreme pandemic that disrupts our ability to do solar radiation management (SRM), then we could have a double catastrophe of rapid warming and the pandemic. So this would push towards more long-term SRM, such as space systems. However, there are also interactions with abrupt sunlight reduction scenarios such as nuclear winter. In this case, we would want to be able to turn off the cooling quickly. And having SRM that can be turned off quickly in the case of nuclear winter could make us more resilient to nuclear winter than just reducing CO2 emissions.
What about Wait But Why?
Nice summary! My subjective experience participating as an expert was that I was able to convince quite a few people to update towards greater risk by giving them some considerations that they had not thought of (and also by clearing up misinterpretations of the questions). But I guess in the scheme of things, it was not that much overall change.
What I wanted was a way to quantify what fraction of human cognition has been superseded by the most general-purpose AI at any given time. My impression is that this fraction has risen from under 1% a decade ago to somewhere around 10% in 2022, with a growth rate that looks faster than linear. I’ve failed so far at translating those impressions into solid evidence.
This is similar to my question of what percent of tasks AI is superhuman at. I was thinking that if we have some idea of what percent of tasks AI will become superhuman at in the next generation (e.g., GPT-5), and how many tasks the AI would need to be superhuman at in order to take over the world, we might be able to get some estimate of the risk from the next generation.
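A rough sketch of the kind of estimate I mean, using the parent comment’s impressions (<1% a decade earlier, ~10% in 2022) as anchors; the smooth exponential growth and the takeover threshold are purely illustrative assumptions:

```python
# Back-of-envelope extrapolation. Anchors are the parent comment's
# impressions; the exponential fit and 90% threshold are my assumptions.
import math

f_2012, f_2022 = 0.01, 0.10                  # fraction of tasks superseded
growth = math.log(f_2022 / f_2012) / 10      # ~0.23/yr, i.e., ~10x per decade

threshold = 0.90                             # hypothetical fraction needed for takeover
years_to_threshold = math.log(threshold / f_2022) / growth
print(f"growth ~{growth:.2f}/yr; ~{years_to_threshold:.0f} years from 2022 to {threshold:.0%}")
```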
I agree that indoor combustion producing small particles that penetrate deep into the lungs is a major problem, and there should be prevention/mitigation. But on the dust specifically, I was hoping to see a cost-benefit analysis. Since most household dust is composed of relatively large particles, it typically does not penetrate beyond the nose and throat, so it is more of an annoyance than a threat to your life. So I am skeptical that, for someone without particular risk factors such as peeling lead paint or allergies, measures such as regular dusting (how frequently are you recommending?), not wearing shoes in the house, or choosing hardwood floors over carpet (giving up benefits such as sound absorption) would be cost-effective when you value people’s time.
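To illustrate, here is the shape of the cost-benefit comparison I’d want to see; every number is a hypothetical placeholder, not a recommendation:

```python
# Shape of the cost-benefit comparison; all values are hypothetical.
hours_dusting_per_year = 26      # e.g., ~30 min/week (placeholder)
value_of_time = 20               # $/hour (placeholder)
annual_cost = hours_dusting_per_year * value_of_time

# For someone WITHOUT risk factors, assume dust (mostly large particles
# stopped in the nose/throat) averts only a tiny mortality risk:
micromorts_averted_per_year = 1  # placeholder
dollars_per_micromort = 10       # ~$10M value of statistical life / 1e6 (placeholder)
annual_benefit = micromorts_averted_per_year * dollars_per_micromort

print(f"cost ~${annual_cost}/yr vs benefit ~${annual_benefit}/yr; "
      f"cost-effective: {annual_benefit > annual_cost}")
```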
Recall that GPT2030 could do 1.8 million years of work[8] across parallel copies, where each copy is run at 5x human speed. This means we could simulate 1.8 million agents working for a year each in 2.4 months.
You point out that human intervention might be required every few hours, but with different time zones, we could at least have the GPT working twice as many hours a week as humans, which would imply ~1 month for the scenario above. As for current speed, you say it is about the same to three times as fast as a human for thinking. You point out that it also does writing, but it is verbose. However, for solving problems like that coding interview, it does appear to be an order of magnitude faster already (and this matches my experience solving physical engineering problems).
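For reference, the arithmetic behind the 2.4 months and ~1 month figures:

```python
# Back-of-envelope check on the wall-clock times above.
speedup = 5                                  # each copy runs at 5x human speed
months_per_agent_year = 12 / speedup         # 1 year of human work -> 2.4 months
print(months_per_agent_year)                 # 2.4

# If oversight limits working hours, but time zones let the copies work
# twice as many hours per week as humans, wall-clock time roughly halves:
hours_factor = 2
print(months_per_agent_year / hours_factor)  # 1.2, i.e., ~1 month
```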
AI having scope-sensitive preferences for which not killing humans is a meaningful cost
Could you say more about what you mean? If the AI has no discount rate, leaving Earth to the humans may require only (within a few orders of magnitude) one-trillionth kindness. However, if the AI does have a significant discount rate, then delays could be costly to it. Still, the AI could make much more progress building a Dyson swarm from the Moon/Mercury/asteroids, with their lower gravity and lack of atmosphere allowing the AI to launch material very quickly. My very rough estimate indicates that sparing Earth might only delay the AI a month in taking over the universe. Even that could require a lot of kindness if the AI has a very high discount rate. So maybe training should emphasize the superiority of low discount rates?
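A minimal sketch of that rough estimate, assuming a one-month delay and a few hypothetical discount rates (at a zero discount rate the delay itself costs ~nothing, and the ~1/trillion figure instead reflects Earth’s share of reachable resources):

```python
# How costly is a one-month delay to the AI under different (hypothetical)
# annual discount rates? The "kindness" required to spare Earth is roughly
# the fraction of total value given up by the delay.
import math

delay_years = 1 / 12
for d in [0.0, 0.01, 0.10, 1.00]:
    # With continuous discounting, a delay of t years costs 1 - exp(-d*t).
    fraction_lost = 1 - math.exp(-d * delay_years)
    print(f"d = {d:4.2f}/yr -> fraction of value lost: {fraction_lost:.1e}")
```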
I think “50% you die” is more motivating to people than “90% you die” because in the former case, people are likely able to increase the absolute chance of survival more, whereas at 90%, extinction is nearly overdetermined.
When asked on Lex’s podcast to give advice to high school students, Eliezer’s response was “don’t expect to live long.”
Not to belittle the perceived risk if one believes in a 90% chance of doom in the next decade, but even with only a 1% chance of an indefinite lifespan, the expected lifespan of today’s teenagers is much higher than that of previous generations.
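Illustrative arithmetic for that claim (all numbers are mine, purely to show the shape of the argument):

```python
# Even under 90% doom within a decade, a 1% chance of an indefinite lifespan
# dominates the expectation for a teenager. All values are hypothetical.
p_doom, p_indefinite = 0.90, 0.01
p_normal = 1 - p_doom - p_indefinite

years_if_doom = 10            # placeholder
years_if_normal = 65          # roughly a teenager's remaining life expectancy
years_if_indefinite = 10_000  # finite stand-in for "indefinite"

expected = (p_doom * years_if_doom + p_normal * years_if_normal
            + p_indefinite * years_if_indefinite)
print(f"expected remaining years: {expected:.0f}")  # ~115, well above a normal ~65
```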
Right, both ChatGPT and Bing chat recognize it as a riddle/joke. So I don’t think this is correct:
If you ask GPT- “what’s brown and sticky?”, then it will reply “a stick”, even though a stick isn’t actually sticky.
Very useful post and discussion! Let’s ignore the issue that someone in capabilities research might be underestimating the risk and assume they have appropriately assessed it. Let’s also simplify to two outcomes: bliss expanding in our lightcone, or extinction (no value). Let’s also assume that very low values of risk are possible, but only if we wait a long time. It would be very interesting to me to hear (maybe with a poll) how low different people would want the probability of extinction to be before activating the AGI. Below are my super rough guesses:
1x10^-10: strong longtermist
1x10^-5: weak longtermist
1x10^-2 = 1%: average person (values a few centuries?)
1x10^-1 = 10%: person-affecting view (currently alive people get to live indefinitely if successful)
3x10^-1 = 30%: selfish researcher
9x10^-1 = 90%: fame/power-loving older selfish researcher
I was surprised that my estimate was not more different for a selfish person. With climate change, if an altruistic person-affecting individual thinks the carbon tax should be $100 per ton of carbon, a selfish person should act as if the carbon tax were about 10 billion times lower: ten orders of magnitude of difference, versus ~one order for AGI. So it appears that AGI is a different case in that the risk is more internalized to the actors. Most of the variance for AGI appears to come from how longtermist one is, rather than from whether one is selfish or altruistic.
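Making the orders-of-magnitude comparison explicit (a rough sketch; the population share is the only assumption):

```python
# For climate, a selfish individual internalizes only ~1/(world population)
# of the global benefit of abatement (very roughly).
world_population = 8e9
altruistic_tax = 100                          # $/ton carbon (person-affecting altruist)
selfish_tax = altruistic_tax / world_population
print(f"selfish-equivalent carbon tax: ${selfish_tax:.1e}/ton")  # ~1e-8, ~10^10x lower

# For AGI, the actor's own survival is at stake, so the selfish and
# person-affecting thresholds above differ by only ~one order of magnitude.
```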
Denkenberger posted two papers he wrote regarding a 150 Tg nuclear exchange scenario (worst case, total targeting of cities). As far as I can tell, although the developed world doesn’t come close to famine and there is theoretically enough food to feed everyone on Earth…
To clarify, the world would have enough food if trade continues and if we massively scale up resilient foods. Trade continuing is very uncertain, and making it likely that we scale up resilient foods would require significantly more planning and piloting.
For the one paper, it is too early to tell. For the other, there just has not been very much engagement. Mainly the public debate has been between the Robock team, which is highly confident that full-scale nuclear war would cause nuclear winter, and the Los Alamos team, which is highly confident that it would not. We find the truth is likely somewhere in between. I talked about this in one of my 80,000 Hours podcasts. Our analysis is quite similar to Luisa Rodriguez’s analysis that cubefox links to below.
Thanks, Peter. That draft assumes global cooperation, which is likely too optimistic, so we have submitted another draft that also analyzes the case of breakdown of trade (hopefully public soon). We also have this paper that looks at the US specifically and takes into account food storage (and uncertainty of whether nuclear war would result in nuclear winter).
Great post! I’ve been mentioning for years that volunteering can be an effective way of making a contribution. Though many people think of volunteering as being for a specific organization, I don’t think it has to be, so a hobby could be an example. I think there are not enough volunteer opportunities in EA, and we’ve worked hard at ALLFED on our volunteer program. Not only have we had dozens of volunteers skill up, but they have also made significant contributions, often co-authoring journal articles and becoming full-time staff. Thanks for the shout-out! I’m actually still volunteering for ALLFED (and donating).
I’m probably a bit more concerned about monkeypox than you are, mainly because it has an alarmingly long incubation period (up to 14 days) and then a punishingly long infectious period (3-4 weeks).
So with doubling every 10.5 days, that would seem to imply a high R0; what’s your estimate? And because some people are still being cautious about COVID, the true R0 (with normal behavior) would be even higher than what is measured now.
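Here is a rough way to back out the implied R0 from that doubling time, assuming simple exponential growth; the generation-time values are my guesses based on the incubation and infectious periods above:

```python
# Implied R0 from a 10.5-day doubling time, for a few guessed mean
# generation times (incubation up to 14 days plus part of a 3-4 week
# infectious period). Approximation: R0 ~ exp(r * Tg) for fixed Tg.
import math

doubling_days = 10.5
r = math.log(2) / doubling_days              # growth rate, ~0.066/day

for generation_days in [14, 17, 21]:
    R0 = math.exp(r * generation_days)
    print(f"Tg = {generation_days:2d} days -> implied R0 ~ {R0:.1f}")
```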
People have been breathing a lot of smoke for the last million years or so, so one might think that we would have evolved to tolerate it, but it’s still really bad for us. Though there are certainly lots of ways to go wrong by deviating from what we are adapted to, our current unnatural environment is far better for our life expectancy than the natural one. As pointed out in other comments, some food processing can be better for us.