Was a philosophy PhD student, left to work at AI Impacts, then Center on Long-Term Risk, then OpenAI. Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI. Now executive director of the AI Futures Project. I subscribe to Crocker’s Rules and am especially interested to hear unsolicited constructive criticism. http://sl4.org/crocker.html
Some of my favorite memes:
[meme, by Rob Wiblin]
[xkcd comic]
My EA Journey, depicted on the whiteboard at CLR:
[whiteboard photo, h/t Scott Alexander]
That’s reasonable, but it seems to be different from what these quotes imply:
There are a bunch of quotes like the above that make it sound like you are predicting that progress will slow down in a few years. But instead you are saying that progress will continue, and that AIs will become capable of doing more and more impressive tasks thanks to RL scaling, but that they'll require longer and longer CoTs to do so? That's very reasonable and less spicy/contrarian; I think most people would already agree with it.
I like your post on inference scaling reshaping AI governance. I think I agree with all the conclusions on the margin, but I think the magnitude of the effect will be small in every case and thus won't change the basic strategic situation.
My own cached thought, based on an analysis I did in '22, is that even though inference costs will increase, they'll still be lower than the cost of hiring a human to do the same task. I suppose I should revisit those estimates...
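For concreteness, here's a minimal back-of-envelope sketch of the kind of comparison I have in mind; every number in it is a hypothetical placeholder, not a figure from that '22 analysis:

```python
# Back-of-envelope: inference cost vs. human labor cost for one task.
# All numbers are hypothetical placeholders, purely for illustration.

def ai_cost_per_task(tokens_per_task: float, usd_per_million_tokens: float) -> float:
    """Inference cost: tokens generated for the task times the price per token."""
    return tokens_per_task * usd_per_million_tokens / 1e6

def human_cost_per_task(hours_per_task: float, usd_per_hour: float) -> float:
    """Labor cost: hours a human would take times their loaded hourly rate."""
    return hours_per_task * usd_per_hour

# Hypothetical example: a hard task needing a very long CoT
ai = ai_cost_per_task(tokens_per_task=5_000_000, usd_per_million_tokens=10.0)  # $50
human = human_cost_per_task(hours_per_task=8, usd_per_hour=75.0)               # $600

print(f"AI: ${ai:.2f}, human: ${human:.2f}, ratio: {human / ai:.1f}x")
```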