Was a philosophy PhD student, left to work at AI Impacts, then Center on Long-Term Risk, then OpenAI. Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI. Now executive director of the AI Futures Project. I subscribe to Crocker’s Rules and am especially interested to hear unsolicited constructive criticism. http://sl4.org/crocker.html
Some of my favorite memes:

[image] (by Rob Wiblin)

[image] (xkcd)

My EA Journey, depicted on the whiteboard at CLR:

[image] (h/t Scott Alexander)
I’ve bought it and plan to read it; you are the second person to recommend it to me recently.
Curious how you think it’s more realistic than AI 2027? AI 2027 doesn’t really feature any superpersuasion, much less intelligence-as-mind-control. It does have bioweapons, but they aren’t at all important to the plot. As for boiling the oceans… I mean, I think that’ll happen in the next decade or two unless the powers that be decide to regulate economic growth to prevent it, which they might or might not do.