I aim to promote welfare and reduce suffering as much as possible. This has led me to work on AGI safety research. I am particularly interested in foundation model agents (FMAs): systems like AutoGPT and Operator that equip foundation models with memory, tool use, and other affordances so they can perform multi-step tasks autonomously.
Previously, I completed an undergrad in CS and Math at Columbia, where I helped run Columbia Effective Altruism and Columbia AI Alignment Club (CAIAC).
It could be interesting to try that too, but we thought other reasoning models would be more likely to predict similar things to the large reasoning models that generated the CoTs in the first place. That hopefully increases the signal-to-noise ratio.