Software engineer since 2010. Left Google in fall ’21, now getting into independent AI alignment research. Currently working on interpretability.
I’m always happy to connect with other researchers or people interested in AI alignment and effective altruism. Feel free to send me a private message.
I am also willing to take your bet for 2030.
I would propose one additional condition: if there is evidence of a deliberate or coordinated slowdown of AGI development by the major labs, then the bet is voided. I don’t expect such a slowdown, but I’d rather not have a stake in it not happening.