Was a philosophy PhD student; left to work at AI Impacts, then the Center on Long-Term Risk, then OpenAI. Quit OpenAI after losing confidence that it would behave responsibly around the time of AGI. Now executive director of the AI Futures Project. I subscribe to Crocker's Rules (http://sl4.org/crocker.html) and am especially interested in hearing unsolicited constructive criticism.
Some of my favorite memes:
[image] (by Rob Wiblin)
[image] (xkcd)
My EA journey, depicted on the whiteboard at CLR:
[image] (h/t Scott Alexander)
I wouldn’t classify that as a weird side channel, btw; that was in fact exactly one of the cases I had in mind back in ’23 when I was going around telling everyone about the importance of not training on the CoT.
I agree that the companies are currently incompetent at paying the associated safety tax, as evidenced recently by the Mythos system card lmao.
However, I think it would be great if they got better at it and committed to paying the tax.