PhD in Geometric Group Theory >> Postdoc in Machine Learning >> Independent AI safety and alignment research.
Looking for mentors in AI safety.
Please feel free to contact me at shivamaroramath@gmail.com
Shivam
Karma: 2
An important line of work in AI safety would be to prove the equivalence of various capability benchmarks to risk benchmarks, so that when AI labs show their model crossing a capability benchmark, it is automatically recognized as crossing an AI safety level.
“So we don’t have two separate reports from them: one saying that the model is a PhD-level scientist, and the other saying that studies show the CBRN risk from the model is no greater than that from an internet search.”
(Jun 10, 2025, 1:35 PM; 1 point) Comment on “AI companies’ eval reports mostly don’t support their claims”
I completely agree, and this is what I was suggesting in this shortform: that we should have an equivalence between capability and risk benchmarks, since all AI companies try to beat the capability benchmarks while, due to obvious bias, underweighting the safety/risk benchmarks.
The basic premise of this equivalence is that if a model is smart enough to beat benchmark X, then it is capable enough to construct or assist with CBRN attacks, especially given that models cannot be guaranteed to be immune to jailbreaks.
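As a rough illustration of what such an equivalence might look like in practice, here is a minimal sketch. The benchmark name, score thresholds, and safety-level labels below are hypothetical placeholders I made up for illustration, not calibrated values or any lab's actual policy:

```python
# Hypothetical sketch: capability-benchmark scores imply risk levels.
# All benchmark names, thresholds, and level labels are illustrative only.

# Each rule says: clearing this capability threshold implies at least
# this risk level, so the corresponding safety case must be made.
# Rules are ordered strictest-first so the highest level matches first.
CAPABILITY_TO_RISK = [
    # (benchmark, minimum score, implied risk level)
    ("grad_level_science_qa", 0.90, "Level 3: CBRN uplift beyond internet search presumed"),
    ("grad_level_science_qa", 0.70, "Level 2: elevated CBRN uplift plausible"),
]

def implied_risk_level(benchmark: str, score: float) -> str:
    """Return the highest risk level implied by a capability score."""
    for name, threshold, risk in CAPABILITY_TO_RISK:
        if name == benchmark and score >= threshold:
            return risk
    return "Level 1: no uplift presumed beyond baseline tools"

# A lab claiming "PhD-level scientist" performance would then automatically
# trigger the matching risk designation, rather than publishing a separate,
# weaker risk claim alongside the capability claim.
print(implied_risk_level("grad_level_science_qa", 0.93))
```

The point of the sketch is that the mapping is fixed in advance: a lab cannot report the capability result without the implied risk designation coming with it.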