That’s pretty surprising to me; for a while I assumed that a world where 10% of the population knew about superintelligence as the final engineering problem would be a nightmare scenario, e.g. because it would cause acceleration.
“Don’t talk too much about how powerful AI could get, because it will just make other people get excited and go faster” was a prevailing view at MIRI for a long time, I’m told. (That attitude pre-dates me.) At this point many folks at MIRI believe the calculus has changed: AI development has captured so much energy and attention that silence no longer helps, and it’s now better to speak openly about the risks.
I do not (yet) know that Nye resource so I don’t know if I endorse it. I do endorse the more general idea that many folks who understand the basics of AI x-risk could start talking more to their not-yet-clued-in friends and family about it.
I think in the past, many of us didn’t bring this up with people outside the bubble for a variety of reasons: we expected to be dismissed or misunderstood, it just seemed fruitless, or we didn’t want to freak them out.
I think it’s time to freak them out.
And what we’ve learned from the last seven months of media appearances and polling is that the general public is far more receptive to x-risk arguments than we (at MIRI) expected; we’d grown accustomed to the arguments bouncing off folks in tech, and we over-indexed on that experience. Now that regular people can play with GPT-4 and see what it does for themselves, discussion of AGI no longer feels like far-flung science fiction. They’re ready to hear it, and will only become more so as capabilities demonstrably advance.
We hope that if there is an upsurge of public demand, we might see regulation or legislation limiting the development and training of frontier AI systems, as well as the sale and distribution of the high-end GPUs on which such systems are trained.