“…given that we live on a planet that includes climate change, over ten thousand nuclear weapons, and Vladimir Putin.”
Affirming the popular belief that Putin is somehow equivalent to “ten thousand nuclear weapons” conveys naivety about geopolitics, and it will be noticed by any reader familiar with geopolitics, government, or nuclear weapons. Joking about it conveys naivety too, albeit of a somewhat different kind. People who work in and around that sector are not supposed to be influenced by anything that looks remotely like propaganda, regardless of the apparent source or of which side appears to be pushing it. At minimum, mudslinging against famous world leaders will be seen as unprofessional meddling in systems and forces that the author does not understand.
Either way, it signals to the reader that the piece is meant exclusively for people who are naive about important facts of how the world works, or that both the author and the readers are naive, in a way that is taken extremely seriously by extremely influential people. If you only want to appeal to random programmers and the like, then I don’t see any issue with it, but people involved in corporate or government decisions are probably just as worth appealing to.
That line was intended to make the point, mildly humorously, that we realise there are many other serious risks in the popular imagination. Our central claim is that AI x-risk is grand civilisational threat #1, so we wanted to lead with that, and since people consider many other things to be potential civilisational catastrophes (if not x-risks), it made sense to mention them (and also implicitly place AI in the reference class of “serious global concern”). We discussed this opener and got feedback on it from several others, and while there was some debate we didn’t see any fundamental problem with it. The main consideration for keeping it was that we prefer specific, even provocative-leaning writing that makes its claims upfront and without apology (e.g. “AI is a bigger threat than climate change” is a provocative statement; if that is a relevant part of our world model, it seems honest to say so).
The general point we took from your comment is that we badly misjudged how the tone comes across. Thanks for this feedback; we’ve changed it. However, we’re confused about the specifics of your point, and unfortunately haven’t acquired any concrete model for avoiding similar errors in future beyond “be careful about the tone of any statement that even vaguely implies something about geopolitics”. (I’m especially confused about how you got the reading that we equated the threat level of Putin and nuclear weapons, and the extent to which the line is “mudslinging” or “propaganda” seems to be exactly the extent to which acknowledging that many people consider Putin a major threat is either of those things.)
Beyond the general tone, another thing we got wrong was not sufficiently disambiguating between “we think these other things are plausible [or, in your reading, equivalent?] sources of catastrophe, and therefore you need a high bar of evidence before thinking AI is a greater one” and “many people think these are more concrete and plausible sources of catastrophe than AI”. The intended reading of “bold” was “socially bold, relative to what many people think”, i.e. a claim only about public opinion.
Correcting the previous mistake might have looked like:
“If human civilisation is destroyed this century, the most likely cause is advanced AI systems. This might sound like a bold claim to many, given that we live on a planet full of existing concrete threats like climate change, over ten thousand nuclear weapons, and Vladimir Putin.”
Based on this feedback, however, we have now removed any comparison or mention of non-AI threats. For the record, the entire original paragraph is:
If human civilisation is destroyed this century, the most likely cause is advanced AI systems. This is a bold claim given that we live on a planet that includes climate change, over ten thousand nuclear weapons, and Vladimir Putin. However, it is a conclusion that many people who think about the topic keep coming to. While it is not easy to describe the case for risks from advanced AI in a single piece, here we make an effort that assumes no prior knowledge. Rather than try to argue from theory straight away, we approach it from the angle of what computers actually can and can’t do.
I just want to clarify that referencing Vladimir Putin works very well for explaining x-risk/vulnerable world hypothesis/inadequate equilibria to most people. I have done it and it is often very helpful.
In DC, talking like that is hazardous to one’s career; I made that mistake a couple of times in my first year there. People in DC should generally be wary of invoking things that are popular on the internet: I’ve had people decide they didn’t want to talk to me (blank stare) after I mentioned Big Data, because to them it was just a buzzword used by people who don’t know what they’re talking about. That’s an extreme case; the best rule of thumb is to avoid talking about politicians or political parties, and especially to avoid expressing strong emotions about either.
People involved in corporate and government decisions don’t have time to deal with existential risks; they are busy gaining and holding on to power. This article is for advisors and low-level engineers.