Student and co-director of the AI Safety Initiative at Georgia Tech. Interested in technical safety/alignment research and general projects that make AI go better. More about me here.
yix
yix’s Shortform
TastyBench: Toward Measuring Research Taste in LLMs
More on giving undergrads their first research experience. Yes, giving a first research experience is high impact, but we want to reserve these opportunities for the best people. Often, this first research experience is most fruitful when they work with a highly competent team. We are turning our focus to assembling such teams and finding fits for the most value-aligned undergrads.
We always find it hard to form pipelines because individuals are just so different! I don’t even feel comfortable using ‘undergrad’ as a label if I’m honest…
Lessons from a year of university AI safety field building
Thanks again, Esben, for collaborating with us! I can confidently say that the above is super valuable advice for any AI safety hackathon organizer; it's consistent with our experience.
In the context of a college campus hackathon, I'd especially stress preparing starter materials and making submission requirements clear early on!
Does anyone know of a convincing definition of 'intent' in LLMs (or a way to identify it)? In model-organisms-style work, I find it hard to 'incriminate' LLMs. Even though the output of the LLM remains what it is regardless of 'intent', I think this distinction matters because 'intentionally lying' and 'stochastic parroting' should scale differently with overall capabilities.
I find this hard for several reasons, but I’m highly uncertain whether these are fundamental limitations:
LLMs behave very differently depending on context. Asking a model post hoc about something it did elicits a different 'mode' and doesn't necessarily let us make statements about its original behavior.
Mechanistic techniques seem to be good at generating hypotheses, not validating them. Pointing at an SAE feature activation labeled 'deception' does not seem conclusive, because auto-interp pipelines often do not include enough context to produce robust explanations of complex high-level behaviors like deception (toy sketch below).
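To make that second point concrete, here's a minimal sketch of the kind of readout I have in mind. Everything in it is hypothetical (the dimensions, the tensors, the feature index, the 'deception' label); the point is just that a high activation on an auto-labeled feature is a hypothesis to investigate, not a verdict about intent.

```python
# Minimal sketch (hypothetical tensors and labels throughout): reading out an
# SAE feature auto-labeled "deception" from a model's residual-stream
# activations. A high value here is a hypothesis, not evidence of intent.
import torch

d_model, n_features = 512, 4096  # assumed dimensions, not from any real model

# Pretend these were loaded from a trained sparse autoencoder.
W_enc = torch.randn(d_model, n_features)
b_enc = torch.zeros(n_features)

# Pretend this came from running the model on a suspect transcript.
resid_acts = torch.randn(10, d_model)  # (tokens, d_model)

# Standard SAE encoder readout: ReLU(x @ W_enc + b_enc).
feature_acts = torch.relu(resid_acts @ W_enc + b_enc)  # (tokens, n_features)

DECEPTION_FEATURE = 1234  # index assigned by an auto-interp pipeline (hypothetical)
print(feature_acts[:, DECEPTION_FEATURE])  # per-token activations of that feature

# Even if these activations are high, the auto-interp label may have been fit on
# narrow contexts; calling this "deception" would still need e.g. causal
# interventions or behavioral checks, which this readout alone doesn't give you.
```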