Part-time research/content role at Atella (AI safety eval)
We are Atella, and we are building STELLA, an AI-safety harness that runs multi-turn safety evals on high-stakes scenarios (e.g., suicidality, mandated reporting, grooming). Roy Perlis (MGH/JAMA AI) is our Chief Scientist, and our leaderboard is at leaderboard.atella.ai.
We are looking for someone to own our weekly content presence on our blog: summarizing our ongoing work and critically assessing safety research. The ideal fit is someone who reads AI safety papers for fun and would like to get paid to write about them.
We want someone who has opinions about eval methodology, not a marketer.
What we’re looking for
+PhD student or recent grad in ML, AI safety, or adjacent field
+Already reads the literature
+Has written something publicly (blog, Twitter threads, anything)
Logistics
+5-10 hrs/week
+$1-2k/month depending on experience
+Remote, async
To apply: Email kit@atella.ai with a link to something you’ve written and one paragraph on what you think is the most interesting open problem in AI safety evaluation right now.