Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor in mechanical engineering at the University of Canterbury. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 156 publications (>5600 citations, >60,000 downloads, h-index = 38, most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 300 articles across more than 25 countries, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here) and on Estonian Public Radio, Radio New Zealand, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, Australian National University, and University College London.
denkenberger
Similarly, I have now said my peace about this. Violence is never the answer,
I’m not sure if it was intentional, but I appreciate the pun on saying your piece.
How many people exist who will be willing to buy at that price? Well, there are about 24 million people in the USA with a net worth of over a million dollars — about 40% of the world's millionaires. As a back-of-the-envelope, order-of-magnitude guess, let's say that there are about 50 million people who could reasonably afford Nectome's services, that about 2% of these people die each year, and that half of those do so in a way that's compatible with going to Oregon and getting MAiD — 500k potential clients per year. Even if only one-in-a-thousand people are open to it philosophically, Nectome could plausibly be serving hundreds of clients per year, if they get really good at marketing. And, if they break the Overton window open, thousands per year is plausible.
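The estimate above multiplies four factors; here is the same Fermi calculation as a minimal sketch, using only the illustrative numbers already stated in the comment (none of them are authoritative data):

```python
# Back-of-the-envelope market sizing, using the comment's own
# illustrative assumptions (not real market data).

affordable_pool = 50e6       # people who could reasonably afford the service
annual_death_rate = 0.02     # ~2% of this pool dies each year
maid_compatible = 0.5        # fraction dying in a MAiD-compatible way
openness = 1e-3              # one-in-a-thousand philosophically open to it

potential_clients = affordable_pool * annual_death_rate * maid_compatible
clients_per_year = potential_clients * openness

print(int(potential_clients))  # 500000 potential clients per year
print(int(clients_per_year))   # 500 clients per year
```

The product is dominated by the openness factor, so the "hundreds vs. thousands per year" question mostly turns on whether marketing or Overton-window shifts can move that one-in-a-thousand figure.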
I’ve run calculations like this before, and have thought, “Why are there not more cryonicists?” I think the sad reality is that many people who “should” be interested in it just have all sorts of rationalizations for their initial impression that it is weird or unnatural or could be worse than death. So I think your prior should be the rate at which people already sign up for cryonics, and then you can argue for why Nectome is different. For the average person who might be interested, I don’t think it’s that different. That said, since you can fund this from life insurance with an early payout, I think it’s more affordable to people than your calculation suggests.
Somewhat related: Hanson argues in Age of Em that there would be hundreds of unique ems to cover all the jobs, and that they would all have a lot of training. But that is for peak performance.
Here’s a paper using islands as natural experiments to provide evidence that colonialism increased the well-being of poor countries.
I didn’t have the patience to jailbreak the scatological refusal of an LLM to produce an image of “elicit paradigm-shiting research work out of the AIs”, but someone else might want to.
It’s not both of these problems—either you do whole body cryopreservation (no decapitation) and no body at the funeral at all, or you do neuro preservation and you can have the rest of the body cremated and present at the funeral.
compatible with normal funerals, so it spends dramatically fewer weirdness points and draws fewer religious objections. I think this means it can scale at least ~2 orders of magnitude more than cryonics.
I couldn’t quickly find the percentage of open casket funerals, but let’s say it is half. If we take the extreme case that right now cryonics is only adopted by people with families who would prefer a non-open casket funeral, then maybe that doubles the market? I think the main religious objections are that you are playing God, the soul doesn’t go back into the body, etc. Also, the MAiD requirement would cause a lot more religious objection. And in reality, many cryonicists sign up even if their relatives would prefer open casket. So I think the market would increase less than 100%, let alone 10,000% due to these factors.
Or “We’re All Medieval Kings.” More accurate but not quite on point. If you look at the human development index, a composite of life expectancy, education, and income, even people significantly below developed country poverty lines would be on par with medieval kings.
In New Zealand, many meat patties are cooked only on the outside, and one of the recommended methods of cooking is a microwave—I never saw that in the US.
Quick polls on AGI doom
With some space freed by classic alignment worries, we can focus on the world’s actual biggest problems. My top candidates (in no particular order):
What about nuclear war? I think a pre-emptive strike is plausible if one country may get power over the world with aligned AI.
I agree that scaling up ahead of time would be best. One possibility might be convincing fluorescent bulb manufacturers to advocate for stricter indoor air quality standards, which could be partly met by converting production to UV. This could save the fluorescent factories from being shut down, given that fluorescents are going to be banned in most applications within a couple of years.
But since we don’t have widespread use or stockpiles, I think we need to have a backup plan for fast scale up in case the pandemic hits soon.
Don’t ride motorcycles; avoid extreme sports, snow sports, and mountaineering; and beware long car rides. The younger you are, the more strongly this advice likely holds.
Also don’t live in NATO cities because of nuclear war threat, and ideally live in places that would likely do better in an extreme pandemic, or be ready to relocate if one occurs.
I thought the diagram was very helpful. It looks like the integral of intensity over the depth of live skin cells is about an order of magnitude higher for 254 nm (the inexpensive and efficient mercury discharge). So you’re saying that because far UVC is more strongly absorbed in the first 10 microns, its inactivation rate of bacteria and viruses is higher?
I’m not sure if this changes things, but the probabilities of the OP were reversed:
If there was a button that would kill me with a 60% probability and transport me into a utopia for billions of years with a 15% probability, I would feel very scared to press that button, despite the fact that the expected value would be extremely positive compared to living a normal life.
I feel your pain. After many rejections, I’ve managed to get about 10 papers through peer review on transformative AI, so it is possible! Honestly, I think publishing on resilience to nuclear winter is even worse. Best of luck!
However, a decade and a half after those first demo drives, Waymo has finally hit a point where the error rate is so low that it’s possible to pull the human safety monitor out of the car completely. Suddenly you have a new kind of post-labor business model that’s potentially much more valuable—an autonomous fleet that can run 24 hours a day with minimal labor costs and with perfectly consistent service and safe driving. This corresponds to the second bend in the graph.
They pulled the human safety monitor out of the car, but I think humans are still doing work remotely (at Cruise as of 2023, each remote operator was monitoring 15–20 cars). But that can still be consistent with minimal labor costs.
Here’s the equivalent poll for LessWrong. And here’s my summary:
“Big picture: the strongest support is for pausing AI now if done globally, but there’s also strong support for making AI progress slow, pausing if disaster, pausing if greatly accelerated progress. There is only moderate support for shutting AI down for decades, and near zero support for pausing if high unemployment, pausing unilaterally, and banning AI agents. There is strong opposition to never building AGI. Of course there could be large selection bias (with only ~30 people voting), but it does appear that the extreme critics saying rationalists want to accelerate AI in order to live forever are incorrect, and also the other extreme critics saying rationalists don’t want any AGI are incorrect. Overall, rationalists seem to prefer a global pause either now or soon.”
Heuristic C: “If something has a >10% chance of killing everyone according to most experts, we probably shouldn’t let companies build it.”
IMO, it’s hard to get a consensus for Heuristic C at the moment even though it kind of seems obvious. It’s even hard for me to get my own brain to care wholeheartedly about this heuristic, to feel its full force, without a bunch of “wait, but …”.
Heuristic F: “Give serious positive consideration to any technology that many believe might save billions of lives.”
That’s a big consideration for short/medium termists. Could another heuristic (for the longtermists) be Maxipok (maximize the probability of an OK outcome)? By Bostrom’s definition of X risk, a permanent pause is an X catastrophe. So if one thought the probability of the pause becoming permanent was greater than p(X catastrophe|AGI), then a pause would not make sense. Even if one thought there were no chance of a pause becoming permanent, if one thought the background X risk per year was greater than the reduction in p(X risk|AGI) for every year of pause, it would also not make sense to pause from a longtermist perspective. Putting these together, it’s not clear that p(X risk|AGI) ~10% should result in companies not being allowed to build it (though stronger regulation could very well make sense).
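The two conditions above can be written as explicit inequalities. The sketch below uses made-up illustrative probabilities (only the ~10% figure comes from the heuristic under discussion; the other numbers are hypothetical placeholders, not estimates from the source):

```python
# Maxipok-style comparison for a pause, with hypothetical numbers.
# Only p_xrisk_given_agi (~10%) appears in the discussion; the rest
# are placeholders for illustration.

p_xrisk_given_agi = 0.10                  # p(X catastrophe | AGI)
p_pause_permanent = 0.15                  # chance the pause never ends (hypothetical)
background_xrisk_per_year = 0.001         # non-AGI X risk per year (hypothetical)
xrisk_reduction_per_pause_year = 0.0005   # safety gained per year paused (hypothetical)

# Condition 1: a pause looks bad if it is more likely to become permanent
# (itself an X catastrophe by Bostrom's definition) than AGI is to cause
# an X catastrophe.
pause_bad_1 = p_pause_permanent > p_xrisk_given_agi

# Condition 2: even a guaranteed-temporary pause looks bad if each year of
# delay incurs more background X risk than it buys in AGI risk reduction.
pause_bad_2 = background_xrisk_per_year > xrisk_reduction_per_pause_year

print(pause_bad_1, pause_bad_2)  # True True -> with these numbers, pausing fails both tests
```

With different placeholder values either inequality can flip, which is the point: p(X risk|AGI) ≈ 10% alone does not settle whether a pause is net positive from a longtermist view.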
Opening the Overton window would be great, but even endorsements from mainstream famous people like Larry King, Seth MacFarlane, Simon Cowell, Paris Hilton, and Britney Spears haven’t seemed to help much.