Was a philosophy PhD student, left to work at AI Impacts, then Center on Long-Term Risk, then OpenAI. Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI. Now executive director of the AI Futures Project. I subscribe to Crocker’s Rules and am especially interested to hear unsolicited constructive criticism. http://sl4.org/crocker.html
Some of my favorite memes:
[meme by Rob Wiblin]
[xkcd comic]
My EA Journey, depicted on the whiteboard at CLR (h/t Scott Alexander):
[whiteboard photo]
FWIW I don’t think Agent-5 needs to be vastly superhuman at politics to succeed in this scenario, merely top-human level. Analogy: a single humanoid robot might need to be vastly superhuman at fighting to take out the entire US army in a land battle, but a million humanoid robots could probably do it if they were merely expert at fighting. Agent-5 isn’t a single agent; it’s a collective of millions.