Hi, I’m Rohan! I aim to promote welfare and reduce suffering as much as possible for all sentient beings, which has led me to work on AGI safety research. I am particularly interested in foundation model agents (FMAs): systems like AutoGPT and Operator that equip foundation models with memory, tool use, and other affordances so they can perform multi-step tasks autonomously.
I am the founder of Aether, an independent research lab focused on foundation model agent safety. I'm also a PhD student at the University of Toronto, supervised by Professor Zhijing Jin, and I continue to run Aether alongside my PhD. Previously, I completed an undergrad in CS and Math at Columbia, where I helped run Columbia Effective Altruism and the Columbia AI Alignment Club (CAIAC). I have done research internships with AI Safety Hub Labs (now LASR Labs), UC Berkeley's Center for Human-Compatible AI (CHAI), and the ML Alignment & Theory Scholars (MATS) program.
I love playing tennis, listening to rock and indie pop music, playing social deduction games, reading fantasy books, watching a fairly varied set of TV shows and movies, and playing the saxophone, among other things.
Just went on my first podcast! It was a pleasure to discuss continual learning for LLM agents and its safety implications with Anna on The Glitchatorio. You can check it out at either of the links below:
Spotify: https://open.spotify.com/episode/4cU4aSMSgmstzPJ9ijglvc?si=DLPPfXbKSk233azzIspSkQ&t=218
Apple Podcasts: https://podcasts.apple.com/de/podcast/the-glitchatorio/id1836777868?l=en-GB&i=1000747658363