These sorts of assumptions are the default in the climate change literature. Few agricultural impact studies account for crop migration or for the prospect that we might introduce new cultivars. I imagine the authors are following the lead of the climate literature there, though that approach obviously massively overstates the impact of cooling or warming.
I also think this highlights a wider problem with the nuclear winter literature. The scholars in the field are very obviously biased. Robock and his collaborators write almost all of the papers but clearly have an agenda. Just take a look at Alan Robock’s website—flashing nuclear bomb gifs and pictures with Fidel Castro.
I followed Robock’s work on solar geoengineering and it was also clearly biased. He claimed that solar geoengineering would knock out the monsoons, but his own paper actually showed that it would reduce disruption to the monsoons. Researchers in the field expressed frustration about this claim, which Robock kept spreading because he doesn’t like solar geoengineering.
If you look at the nuclear winter literature in the 1980s, all of the scientists were saying “this is bullshit”. The only thing that changed with the second round of nuclear winter papers, starting with Robock, was the use of modern climate models. But the criticism was never about the climate models; it was about how much particulate matter would get into the atmosphere and how long it would stay there. So the idea that modern science validated nuclear winter is just wrong.
A very prominent climate physicist told me that the assumptions in the Robock papers are turned up to maximise damage rather than to actually be plausible, and that people are scared to point this out because of the politicised nature of the debate on nuclear war.
The smoke estimates in the Robock papers didn’t change despite a massive decline in global nuclear arsenals.
Thanks a lot for this! I think I agree with you, to an extent, that if we define alignment as avoiding human extinction due to rogue AI, the distinction between alignment and capabilities seems relatively clear, though I do have some reservations about that.
Independent of that, what do you make of the distinction between intent-alignment (roughly, getting AI systems to do what we intend) and capabilities? If you look at many proposed intent-alignment techniques, they also seem to improve capabilities on standard metrics. This is true of RLHF, adversarial training, chain-of-thought prompting, and most or all robustness techniques. RLHF was proposed as an intent-alignment technique, and it made GPT-4 much more intent-aligned in the sense that its policy is more aligned with the intentions of programmers and users. This also made the system more useful and capable. I would expect RL from AI-augmented feedback to improve intent-alignment and capabilities as well. Do you disagree with that line of argument?
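To make the overlap concrete, here is a minimal toy sketch of the RLHF-style pattern I have in mind (my own illustration, not anything from the papers; the names like ToyRewardModel and the toy embeddings are hypothetical): a reward model fit on human preference comparisons is then used as the objective for the policy, so the same training signal pushes the system both toward doing what users intend and toward producing outputs users find more useful.

```python
# Toy illustration only: a Bradley-Terry-style reward model fit on pairwise
# "preferred vs rejected" comparisons, then used as the objective for a toy
# policy. The point is that one learned preference signal drives both
# intent-alignment and measured usefulness. All names here are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)
EMB = 16  # toy embedding size standing in for a response representation

class ToyRewardModel(nn.Module):
    """Scores a response embedding; trained so preferred responses score higher."""
    def __init__(self):
        super().__init__()
        self.score = nn.Linear(EMB, 1)

    def forward(self, x):
        return self.score(x).squeeze(-1)

reward_model = ToyRewardModel()
rm_opt = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

# 1) Fit the reward model on pairwise human preferences (Bradley-Terry loss).
for _ in range(200):
    preferred = torch.randn(32, EMB) + 0.5   # stand-in for intended/helpful responses
    rejected = torch.randn(32, EMB) - 0.5    # stand-in for unintended/unhelpful responses
    loss = -torch.log(torch.sigmoid(reward_model(preferred) - reward_model(rejected))).mean()
    rm_opt.zero_grad()
    loss.backward()
    rm_opt.step()

# 2) Freeze the reward model and nudge a toy "policy" (here just a learnable
#    response embedding) to maximise the learned reward -- the step that makes
#    outputs both more intent-aligned and more useful on standard metrics.
for p in reward_model.parameters():
    p.requires_grad_(False)

policy_output = torch.zeros(EMB, requires_grad=True)
policy_opt = torch.optim.Adam([policy_output], lr=1e-1)
for _ in range(100):
    loss = -reward_model(policy_output.unsqueeze(0)).mean()
    policy_opt.zero_grad()
    loss.backward()
    policy_opt.step()

print("reward of optimised policy output:", reward_model(policy_output.unsqueeze(0)).item())
```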