Evan R. Murphy

I’m an AI alignment researcher currently focused on myopia and language models. I’m also interested in interpretability and other AI safety-related topics. My research is independent and currently supported by a grant from the Future Fund regranting program*.

Research that I’ve authored or co-authored:

Other recent work:

Before getting into AI alignment, I was a software engineer for 11 years at Google and various startups. You can find details about my previous work on my LinkedIn.

I’m always happy to connect with other researchers or people interested in AI alignment and effective altruism. Feel free to send me a private message!

--

*In light of the FTX crisis, I’ve set aside the grant funds I received from the Future Fund and am evaluating whether/how this money can be returned to customers of FTX who lost their savings in the debacle. In the meantime, I continue to work on AI alignment research using my personal savings. If you’re interested in funding my research or hiring me for related work, please reach out.