I was a philosophy PhD student, then left to work at AI Impacts, then the Center on Long-Term Risk, then OpenAI. I quit OpenAI after losing confidence that it would behave responsibly around the time of AGI. I'm now executive director of the AI Futures Project. I subscribe to Crocker's Rules (http://sl4.org/crocker.html) and am especially interested to hear unsolicited constructive criticism.
Some of my favorite memes:
[image: meme by Rob Wiblin]
[image: xkcd comic]
[image: My EA Journey, depicted on the whiteboard at CLR (h/t Scott Alexander)]
Plea Addressed to the Hypebusters:
Here’s what [Insert evil tech CEO here] wants:
--He wants potential investors to think that AI is the next internet or electricity: that it'll automate half the jobs in the economy and make them trillionaires if they invest now.
--He wants his loyal lieutenants to think AI will reach superintelligent levels in the next few years, so they can help him plan out his moves.
--He wants his researchers to think that too, because they're the ones who need to do the research to get there, and also the research to make sure the superintelligences are obedient to the company (i.e. to him).
--He wants everyone else (the public, Congress, etc.) to think it’s all just hype, so that they don’t interfere.
You might think you're Fighting the Good Fight by loudly shouting "it's all hype." But the AI company employees don't give a shit what you say, and neither do the investors. So you're playing right into [Insert evil tech CEO here]'s hands.