This is what we are doing with the Existential Risk Observatory. I agree with many of the things you’re saying.
I think it’s helpful to debunk a few myths:
- *No one has communicated AI xrisk to the public debate yet.* In reality, Elon Musk, Nick Bostrom, Stephen Hawking, Sam Harris, Stuart Russell, Toby Ord, and recently William MacAskill have all sought publicity with this message. There are op-eds in the NY Times, Economist articles, YouTube videos and TED talks with millions of views, a CNN item, at least a dozen books (including for a general audience), and a documentary (incomplete overview here). AI xrisk communication to the public debate is not new. However, the public debate is a big place, and compared to e.g. climate, coverage of AI xrisk is still minimal (perhaps a few articles per year in a typical news outlet, versus dozens to hundreds for climate).
- *AI xrisk communication to the public debate is easy; we could just ‘tell people’.* If you actually try this, you will quickly find that public communication, especially of this message, is a craft. If you make a poor-quality contribution or your network is insufficient, it will probably never make it out. If your message does make it out, it will probably not be convincing enough to make most media consumers believe AI xrisk is real. It is not necessarily easier to convince a member of the general public of this idea than to convince an expert, and the case of Carmack and many others shows how difficult that can be. Arguably, LW and EA are the only places where this has really been successful so far.
- *AI xrisk communication is really dangerous, and it’s easy to irreversibly break things.* As the wealth of existing communication and its limited effect show, it is really hard to move the needle significantly on this topic. That cuts both ways: fortunately, it is not easy to really break something with your first book or article, simply because it won’t convince enough people. That means there’s some room to experiment. However, it is also, unfortunately, fairly hard to make significant progress here without a lot of time, effort, and budget.
We think communication to the public debate is net positive and important, and a lot of people could work on this who could not work on AI alignment. There is an increasing amount of funding available as well. Also, despite the existing corpus, the area is still neglected (we are to our knowledge the only institute that specifically aims to work on this issue).
If you want to work on this, we’re always available for a chat to exchange views. EA is also starting to move in this direction; it would be good to compare notes with them as well.
Thank you very much for this response!