I was a philosophy PhD student, then left to work at AI Impacts, then the Center on Long-Term Risk, then OpenAI. I quit OpenAI after losing confidence that it would behave responsibly around the time of AGI. I'm now executive director of the AI Futures Project. I subscribe to Crocker's Rules (http://sl4.org/crocker.html) and am especially interested in hearing unsolicited constructive criticism.
Some of my favorite memes:
[image: meme by Rob Wiblin]
[image: xkcd comic]
My EA Journey, depicted on the whiteboard at CLR:
[image: whiteboard drawing, h/t Scott Alexander]
What? I think the opposite is true. Absent capability restraint, the situation is going to get very intense. China, for example, will be rightly fearful of what the US will do to it if the US undergoes an intelligence explosion.
A more abstract argument, which I don't buy nearly as much: AGI, being smarter, more numerous, and faster-thinking than humans, will be like accelerating history itself. A century will pass in a decade, or maybe in a year, or maybe in a month, depending on takeoff speeds. Therefore, a century's worth of turmoil and conflict will be compressed into that time as well.