Which is what I hope agent foundations is about. Something useful for AI safety. Something useful in practice. If it’s a cute branch of essentially-math that doesn’t necessarily concern itself with saving the world from AI doom, why should anyone give you any money or status in the AI safety community?
Separating this response out for visibility—it is unequivocally, 100% my goal to reduce AI x-risk. The entire purpose of my research is to eventually apply it in practice.
I believe you, and I want to clarify that I did not (and do not) mean to imply otherwise. I also don’t mean to imply you shouldn’t get money or status; quite the opposite.
It’s just the post itself[1] that doesn’t make the whole “agent foundations is actually for solving AI x-risk” thing click for me.
[1] And other posts on LW trying to explain this.